A Few Thoughts on Ray Ozzie’s “Clear” Proposal (cryptographyengineering.com)
104 points by Hagelin 9 months ago | 72 comments



The work I did on mobile encryption was framed thusly:

- Deriving a key for all devices from a single key creates a single, catastrophic failure mode for the solution where all devices become vulnerable together. As soon as customers figure this out, nobody serious will adopt it because they can't afford to accept that known risk exposure.

- We're assuming that the HSM we're using doesn't have a bias in its key generation RNG to limit the real key space, because if I were an intel agency, that's probably the first lever I would pull.

- The entropy of the additional derivation components we can source from the individual device to locally diversify keys is really limited, and some really smart people are going to be reversing our code. Apple (unrelated to my own work; I never worked for anyone affiliated with them) relied on limiting the number of attempts, effectively in hardware, to mitigate this risk.

Personally, I think the Ozzie proposal is a red herring to give the feds rhetorical leverage by providing their side with something few people understand, but can get behind politically because it's sufficiently complex as to be "our" magic vs. "their" magic. This is to drown out technical objections and make the problem a political one where they can use their leverage.

As the author (Green) notes, we can design some pretty crazy things. If the feds came out and said, "build us a ubiquitous surveillance apparatus, or at least give us complete sovereign and executive control of all electronic information," that is a technically solvable problem, but in the US a legally intractable one. So instead, they want those effective powers without the overt mandate.


> It literally refers to a giant, ultra-secure vault that will have to be maintained individually by different phone manufacturers

We can't even trust manufacturers to provide updates in most cases. Placing that much trust in them is nothing short of lunacy.


I don't see anything new in the alleged proposal; this is the same old crypto war. This is "just" key escrow.

One might as well propose to have the manufacturers build in the government's public key (and auto-brick the phone on use) such that the phone can detect if it is really the government reading the phone.

Another note:

"Ozzie’s proposal relies fundamentally on the ability of manufacturers to secure massive amounts of extremely valuable key material against the strongest and most resourceful attackers on the planet. "

This is not true: the phone encrypts the user's passcode against the manufacturer's public key. If the government tries to read the phone, it will get the encrypted passcode (useless on its own) and send it to the manufacturer, who decrypts the passcode. A single private key is not a massive amount of information. Not that this changes anything about the protection needs: whether it's a piece of paper containing the, say, 4096 bits (512 bytes), or, in Matthew Green's misinterpretation, billions of 512-byte records (half a terabyte) on a single HDD, they both have the same value. The whole code base needs similar protection anyway: their bootloaders are already signed by the manufacturer.

All this centralization is bad, leave the crypto genie out of the bottle please...


I do prefer the idea of storing it on paper... at least it's a little easier to lock up. Even a big camera will only take a few thousand pictures before it fills up, and physical access is a lot easier to enforce.

If we make 2 billion phones a year (Apple itself is just over 200M) and you have a line printer running full blast (66 lines = 1 page per sec), you could do Apple with one printer... and the world in 10. It would be a lot of boxes of paper though... about a box an hour.

edit: to be clear I was assuming that almost every dot in the matrix was a valid bit and there were 66 keys per page... 80 or even 132 columns at 7x5 wouldn't be enough for 4096 bits otherwise.
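A quick sanity check of those rates in Python (my own sketch, assuming one 4096-bit key per line at 66 lines per second, as above):

    SECONDS_PER_YEAR = 3.14e7     # pi * 10^7 seconds in a year
    KEYS_PER_SECOND = 66          # one 4096-bit key per line, 66 lines/s

    world_phones = 2_000_000_000  # ~2 billion phones per year
    apple_phones = 200_000_000    # Apple's share, just over 200M

    print(world_phones / KEYS_PER_SECOND / SECONDS_PER_YEAR)  # ~0.96 printer-years
    print(apple_phones / KEYS_PER_SECOND / 86_400)            # ~35 printer-days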


impressive calculation for the per-phone case!

but as I wrote, it's not necessary in Ozzie's scheme: Apple only needs to store the single private key. All the phones contain the same public key corresponding to it. All phones encrypt the user passcode to the same public key. When a user tries to unlock his own phone with his correct passcode, the phone encrypts his passcode and arrives at the same ciphertext, unlocking the phone. When the government seizes the phone, with a special device they have the phone show the encrypted passcode, dump the gigabytes of encrypted phone contents, and burn an irreversible efuse in the processor, disabling it. They send the encrypted passcode to Apple, who verifies it's the government indeed. Apple uses its single private key to decrypt the user passcode. Apple sends this passcode to the government. The government can decrypt the image.

In the proposal there is no need for a massive database of key material. It's nonsense.

(in practice Apple would use threshold cryptography, so that at least k out of n private keys, each belonging to specially trained and screened employees, are necessary to decrypt)
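For illustration, a toy k-of-n Shamir secret split in Python (a sketch only; a real deployment would use a vetted library and keep the shares inside HSMs):

    import secrets

    PRIME = 2**127 - 1  # toy field; a real system would use vetted parameters

    def split(secret, k, n):
        """Split `secret` into n shares; any k of them recover it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def eval_at(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, eval_at(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x=0 recovers the secret."""
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    shares = split(123456789, k=3, n=5)
    assert recover(shares[:3]) == 123456789  # any 3 of the 5 shares suffice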

(in practice each phone has a hardcoded random nonce in efuses, and instead of encrypting the user passcode it encrypts [passcode+nonce]; otherwise the government could just brute-force the 10^4 possible passcode encryptions against the public key)
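A minimal sketch of that flow using the Python cryptography package (all names and parameters here are illustrative, not from Ozzie's proposal; note that OAEP is randomized, so a real design could not literally re-encrypt and compare as described above):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Manufacturer: one key pair; only the public half ships on phones.
    vault_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    escrow_pub = vault_key.public_key()

    # On the device: the per-device nonce (burned into efuses) is what
    # blocks brute-forcing the ~10^4 possible passcodes offline.
    nonce = os.urandom(32)
    passcode = b"1234"
    escrow_blob = escrow_pub.encrypt(passcode + nonce, OAEP)

    # At the manufacturer, after verifying the government's request:
    recovered = vault_key.decrypt(escrow_blob, OAEP)
    assert recovered[:-32] == passcode  # the nonce itself is irrelevant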

I am only saying that this can be done efficiently, not saying that I agree with the desirability of key escrow. This idea of key escrow is as old as cryptography.


Totally agree that they only _need_ a single secure key and a BUNCH of insecure nonces. However, if I were forced to keep a key in escrow and wanted it to be secure, I'd put a uniquely generated (lots of lava lamps?) key for every device on paper and force anyone who wanted to look them up to do it physically, in person, with paper. Out go the digital public keys; in stay the paper private keys, in a well-observed building full of a zillion boxes of paper. The most insecure part is still the key generator.

If the feds want to audit, they can... but everyone will see it on video, what boxes they opened, and what pages they (could have) looked at.

I'd hire some magicians for pen testing too.

edit: pi*10^7 sec in a year is a useful approximation


1) Regarding BUNCH: the nonces are random, and hence there will be as many as there are phones, drawn from a suitably large n-bit space. But Apple does not need a local copy of the nonces: if the government requests a decryption and Apple agrees, it will decrypt and find the user passcode along with the (irrelevant) nonce.

2) "and force anyone who wanted to look them up to do it physically, in person, with paper" I dont understand your proposal? If the government wants to decrypt a phone, they should come to Apple in person with what paper? How do you insure that everyone knows when a phone is audited? (Ozzie's proposal or what we read of it in the article does not adress insuring the populace finds out whenever a phone is decrypted). In your scenario the well observed building is operated by Apple or by the populace?


The idea is that each of the secret keys (not nonces) would be kept on paper in a well-observed location, with only the public keys leaving. The building would be operated by whoever is responsible for generating the keys and showing that their keys aren't (hopefully, yet) compromised. They could allow the public to observe, and perhaps the boxes could be marked with the range of IMEIs/keys contained within. If the cops want to go in and get a key out of a box, they can get a warrant to do it, but everyone can know which few hundred thousand phones have been compromised.

It doesn't completely prevent malfeasance... it just makes it a PITA.


suppose Apple owns the building:

* There is no advantage in having cops come over: either a secret is revealed or it is not. Any information to convince Apple concerning a specific case or phone could just as well be sent over the internet. Allowing them to enter looks like a serious threat vector to me; they could plant things, smuggle things out...

* Either Apple is faithfully reporting each count of the cops unlocking a phone, or it isn't. In the case of requesting over the internet, the cops can't bring in devices to look through closed boxes or whatever.

* Is your fear rooted in a perceived sense of insecurity because of the small passcode (4 decimal digits) and the effect that would have on the security of the encrypted [passcode+nonce]? Because that is exactly why the random nonce is there. In theory the user could select his own nonce and have it burned into efuse memory, but he would only be able to change the nonce a limited number of times. Then the user can roll as many dice as he wants and xor bits to smithereens ;)

but it all stays crap key escrow; it's just a big "Eureka!"-show trial balloon to gauge public acceptance, no?


The real risk is that whatever the key-holder thinks is air-gapped storage isn't, and the whole lot is secretly lost to crackers, state-sponsored or not... that's a lot harder to do with 1000 tons of paper.

The point is that even a dedicated party trying to keep the keys safe probably can't do it (for any length of time) on digital media.


The main difference between this proposal and the previous ones is the bricking step, which is supposed to make it transparent when the key has been revealed. But once the key has been revealed, what prevents an attacker from replacing the main board of the phone (keeping the phone's exterior and its SIM card), and copying all the data to the new board? A non-technical user (and even most technical users) wouldn't know the difference.


Bricking the phone works against law enforcement by only allowing raw access to the data. Even if Clear worked correctly, law enforcement couldn't open apps and see the data in the correct context. They'd have raw data files full of indexes, hashes, and cached data. Worse, apps would start to encrypt data on the client specifically to avoid Clear.

The only significant change between plain key escrow and Clear (bricking the phone) would defeat the usefulness of Clear.


Apps could potentially work in read-only mode. Plus it's pretty easy to design a tool to pretty print iMessages given raw data, and that alone would be very useful for law enforcement.


> [..] and keep the secret key in a “vault” where hopefully it will never be needed.

That's only bullet point one, and that's where it already falls apart.


I can't think of anything worse than my plastic, metal, and glass friend being forced to snitch against me. It's like my best friend betraying me. Beyond creepy; key escrow proposals are the very definition of totalitarianism.


Can anyone explain why the government wouldn't just mandate that they be given all the keys from the start? Why would they put up with Apple as a middleman who could potentially refuse their requests?

Also, this key escrow scheme is near impossible to scale to more than one government. Now we need a way to authenticate government agents; good luck with that.


The government would be a single point of failure so it's cheaper and more secure to privatize. Also, private control of keys acts as a check upon government abuse.


But why? Why give the government such a ripe target for abuse? Why tilt the balance of power even further in its favor?


Also a good question, but this post focused, on purpose, only on why it's a stupid idea technically.


Point taken. He briefly talks about “policy” in the beginning. Policy is the instrument through which the ruling class gradually strips away individual freedoms. We need to prominently feature freedom in these conversations, because ultimately that’s what’s at stake here.


Many people, especially those outside the tech community, do not view law enforcement as an adversary. In the US, the balance that we have struck is that the government cannot search our property, except upon probable cause (fourth amendment). While I personally don't like it, I think that warrant-based key escrow is reasonable from a policy perspective.


In a post-FISA world, where the campaign of a presidential candidate was wiretapped under false pretenses, this view of the world is criminally naive.


Because every organization strives for more power, whether its members admit it or not.

The very idea of "checks and balances" is that different organizations would strive for power on opposite directions, thus preventing each other from gaining much.


Because now there is a patent, the government can start forcing companies to implement it. And the patent owner will profit. Quite smart, actually.


Except prior art?

I mean, clearly there’s a difference between “blows up” and “destroys all the key material”, but clearly Apple can point to prior art here.


If you'd like to look at the proposal yourself rather than to interpret it through others:

https://github.com/rayozzie/clear/blob/master/clear-rozzie.p...


I’m not sure of the benefit to Apple or other phone manufacturers. This looks like a substantial cost with zero benefit to anyone other than law enforcement. And substantial new risk of misuse or abuse.

What’s Ozzie’s true motivation? Is he looking to start a company running Clear and raking in patent revenue? I get why the governments want this, but not why a citizen would propose this.

If it weren’t Ray Ozzie, I would think this was just part of some propaganda push.


> I’m not sure of the benefit to Apple or other phone manufacturers. This looks like a substantial cost with zero benefit to anyone other than law enforcement.

The benefit is that law enforcement has access to relevant information. Society has a vested interest in this provided it doesn't infringe any other rights. It's why warrants exist. If you have the ability to respect a warrant without hurting your customers, it should be illegal not to do so.

Obviously there are significant technical issues, which is why this is contentious; those are outside the scope of this comment.


Money. His true motivation is money. Secondary to that is prestige; he "solved" this problem.


You might be right, but you’re guessing. I’m pretty pro-crypto. I’m an anarchist. I am generally oriented towards non-governmental solutions to everything, including violence.

But even me, even with that bias, I still worry quite often about what evil can lurk behind cryptographic structures, and what effect wide availability of strong crypto will have on that.

I don’t know that it will be positive or negative... my gut says positive, but I worry. And so it’s not crazy to me to think Ozzie might be legitimately worried about people’s safety.


Do you personally know him, or is this just speculation?

Because most entrepreneurs aren't running businesses primarily for the money.


Just a nitpick. Matthew Green uses the analogy of signing keys being leaked often as evidence that Ozzie’s proposed system would be similarly insecure. This is a weak analogy: signing private keys are often leaked because their use case requires them to be “online” in some fashion (code must be signed with the private key so it can be verified with the public key). Similarly, CAs must use private keys operationally (to sign customer CSRs), increasing the risk of key compromise.

In Ozzie’s proposal, the private key never actually has to exist outside the environment it was created in; only the public key does. As pointed out in other comments, LE would not need access to the private key either; they could simply submit the encrypted passcode to the manufacturer, who would then decrypt it on their behalf using the private key.


Code signing and decryption both require access to the private key, possibly through a hardware security module. I don't see why decryption has less exposure.


Extremely exceptional access only, in cases where thousands or even millions of people's lives could be at stake. Since we can't create a fully unbreakable software/hardware security system anyway, if ever, companies can use technology + psychology: "unintentionally" create an extremely difficult-to-find bug that requires extremely talented engineers and large hardware resources to exploit, then "unintentionally" share it in the most discreet way possible, probably just verbally, with a very few trusted third parties. And it is not officially approved by top management, which may not even know about it. We don't live in a perfect world and we don't have a perfect solution. JUST COMMENTS, NOT A SUGGESTION!


> Extremely exceptional access only, in cases where thousands of people's lives could be at stake or millions.

And how do we determine when that's actually the case and when it's overhyped or flawed intelligence?

> We don't live in a perfect world and we don't have a perfect solution.

Exactly, so focusing on phone encryption is probably a waste of time.


I don't want this scheme. I don't want key escrow. But a critique in the document is the "if lost, lost forever" moment: if the escrow DB is compromised, the article says, all phones are now pwned. For that point in time, true.

But phones are online devices. Why does the escrow key have to be a constant, such that if the central store is compromised, all phones prior to that date are compromised forever?

E.g., re-spin the per-phone keygen on some cycle, and you define a window of risk, but it passes. The re-spin clearly has to pass through some protocol, but we've been doing ephemeral re-keying forever with websites.
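A sketch of what such a rotation window might look like (entirely hypothetical; nothing like this is in the Clear proposal):

    import time

    EPOCH_SECONDS = 90 * 24 * 3600  # hypothetical 90-day escrow epochs
    RETAIN_EPOCHS = 4               # shred private keys older than ~a year

    def current_epoch(now=None):
        return int((time.time() if now is None else now) // EPOCH_SECONDS)

    def keys_to_shred(vault_epochs, now=None):
        """Epochs whose escrow private keys the manufacturer destroys."""
        cutoff = current_epoch(now) - RETAIN_EPOCHS
        return [e for e in vault_epochs if e < cutoff]

    # Each epoch, the phone would fetch the new signed escrow public key
    # and re-encrypt its passcode blob to it; a vault breach then exposes
    # only devices that escrowed within the retention window.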


He talks about “massive amounts of extremely valuable key material“ needed to be stored for billions of devices.

It’s not like this would be Fort Knox. All that data could be stored on a couple of USB sticks, which, really, makes it even scarier. Someone could hold the entire contents in the palm of their hand and walk away with everything.


The article makes exactly that point:

> If ever a single attacker gains access to that vault and is able to extract, at most, a few gigabytes of data (around the size of an iTunes movie), then the attackers will gain unencrypted access to every device in the world. Even better: if the attackers can do this surreptitiously, you’ll never know they did it.


What if someday we get political leadership so awful that, hypothetically, a former CIA chief feels compelled to warn that it is fundamentally dangerous to the nation?

One answer might be that we deserve such an outcome, and there is no reason to insulate encryption from the negative consequences. But is that a good answer?


> Also, when a phone is unlocked with Clear, a special chip inside the phone blows itself up.

no thanks


Assuming “blows itself up” means it’s bricked rather than “does a Samsung”, I’m ok with that. As the article explains, it’s the only form of intrusion detection in the whole thing.


Personally, I believe real-world actions should be the focus of surveillance. The empires are simply trying to cheap out by focusing on surveillance of computer activity.

This is the most profound part of Matthew Green's piece in my opinion:

"While this mainly concludes my notes about on Ozzie’s proposal, I want to conclude this post with a side note, a response to something I routinely hear from folks in the law enforcement community. This is the criticism that cryptographers are a bunch of naysayers who aren’t trying to solve “one of the most fundamental problems of our time”, and are instead just rejecting the problem with lazy claims that it “can’t work”. "

I believe the most fundamental problem is: how can we decentralize real-world security? I am FOR mass surveillance but AGAINST centralized mass surveillance.

Assume every nook and cranny of the world was covered by community cameras, and the cameras encrypted the streams with threshold cryptography, such that the populace holds different parts of the secret; then one needs "enough" citizens agreeing to reveal the contents seen by a specific camera at a specific time. This way it's public for all or public for none. Every accident, every murder, ...

Suppose a body is found, and the group decides to reveal the imagery: oh yes, in this case the person was murdered! Look, the perpetrator is walking out of view to the next camera, then the next, ... we can trace him to where he is now. Properly trained citizens (in a now-authorized police role) go and arrest the guy. He is now in prison waiting for his trial (also with community cameras, so no broomsticks in prisoner ani). At trial time, if the person denies it, or claims to be a different person from the arrested one, we can trace through all the imagery from his committing the crime to his sitting in court right there and then.

So yes, there is a real conflict between cryptographers and centralized law enforcement. We don't need no spooks!

And the spooks cannot decode the camera imagery: a large enough number of citizens (chosen at random by cryptographic sortition), running instances of the good-citizen client software, need to release their parts of the shared secret.

EDIT:

So there are, broadly speaking, 2 kinds of crimes:

* meatspace crimes (murder, negligence, rape, making child porn (automatically rape), ...)

* cyber crimes (copyright, child porn, ...)

I argue that not implementing such a community camera system is a form of negligence in itself.

It does not address things like copyright infringement, but... that's not exactly the most popularly supported concept.

Then there is the problem of child porn: fake and real.

I argue that with deepfakes, any faked child porn will eventually become indiscernible from real child porn.

Which leaves the problem of official child porn recorded by the community cameras and used to apprehend perpetrators (since these also sign the imagery to attest to its authenticity!).

Due to taboo, many victims of child abuse didn't realize, or only had doubts, that they were suffering abuse, enabling the abuse to continue. Without concrete visual examples for them to explore, to assess whether they are or are not suffering child abuse, how can they alert others to their situation? We send these children extremely mixed messages: absolutely tell us if you are being abused, but absolutely never falsely report a person. Merely asking someone else for advice is automatically interpreted as a child reporting child abuse. How can a child assess his or her situation? With abstract questions using words and connotations it does not know?

I believe the number of reported child abuses would go up if we used these community cameras for decentralized mass surveillance.

Also for crime in general (theft, murder, ...), the knowledge that you will, with extremely high probability, be caught will deter a lot of crime. I would not be surprised if the rate of "impulsive" crimes (where the criminal was supposedly not able to control his urges) would drop substantially, revealing that in the current system such criminals often get off the hook.

There will still be rude people, getting fines for squeezing women in the ass while drunk. But for any actual crime in general, both victim and perpetrator would know that the victim can simply report it to the group, and that the perpetrator cannot escape for lack of evidence. The current lack of evidence constantly discourages people from reporting crimes (as there is risk involved: financial (lawyers), emotional (potential incredulity at the police station), ...).

One might think that this will cause criminals to escalate to murder ("if you rob a victim, you should kill her, or else she will report you"), but hiding a body will be very hard, and if a person goes missing the friends and relatives will report it, and instead of following the criminal we can follow the missing person from the time and place she was last reported seen!

As long as cryptographers only draw the privacy card, the law enforcement community has a point. As long as the law enforcement community only draws the centralized power card, the cryptographers have a point.

Only when we have decentralized mass surveillance can we have both privacy (as long as you don't commit crimes or go missing) and real law enforcement.

Common FAQ:

What if, say, a stalker repeatedly reports his ex as "missing"? Cry wolf too many times and you will be blocked from reporting a person missing; the good-citizen client software that the citizens individually run will refuse to comply.

What if a stalker, or a group of them, repeatedly reports a "murderer" in a celebrity's bedroom? We can send local but randomly selected, properly trained citizens (in a police role) to go check the room; if the supposed dead body is not there, there is no reason to unlock the imagery.

(I will add more as people ask)


Regarding your distinction between real and cyber crimes, digital evidence can certainly be relevant in a murder case, e.g. iMessages, location history, search history. Also, the read-only bricking chip tries to allow search but exclude ongoing surveillance, though I don't think it's technically feasible.


"Regarding your distinction between real and cyber crimes, digital evidence can certainly be relevant in a murder case, e.g. iMessages, location history, search history."

But the cameras are supposed to completely cover society, so we don't need the cyber info. Indeed, perhaps the perpetrator has a secret paper diary, written in code, where he writes down his exploits. Who cares? We have signed imagery of him committing the crime. Any extra information is useful in the statistical sense (to understand what drives a person to do this or that, or to better prepare citizens on how to avoid falling victim to such and such crime), but should be unnecessary to convict a person. The most relevant evidence is the actions themselves, I think.

About location history: the camera system is more reliable than cell phones, since a cell phone may be given to a friend willing to provide an alibi, or be GPS-spoofed, etc.

The major reason these cell phone messages, search history, etc. are highly relevant is simply because we lack the community camera system.

Another problem is that phone evidence is highly irregular: some people are more aware of mass surveillance than others when communicating (which is also highly correlated with status in society!), some people refuse to have a cell phone on them, ...

When they lack enough evidence, the prosecution is forced to grasp at straws (irrespective of the guilt or innocence of the defendant), and then the value of computer/phone activity seems very high, especially if boots on the ground or scientific investigation of crime scenes is so much more expensive. Then it is easy to view this digital data as highly relevant and reliable.


> * cyber crimes ([...], child porn, ...)

I find that a disturbing classification.


Perhaps a miscommunication?

Suppose we talk about vehicles, and I could classify them by color (red, green, ...), or by type (cars, planes, ...)

What is a good or bad classification?

What is disturbing you? the mere topic?

We can't improve prevention of a problem without talking about the problem.

EDIT:

Note: I have added making child porn to meatspace crime (even though that is obvious) specifically for you.

Would you consider the "Napalm Girl" [0] to be child porn? Or evidence of the atrocities of the use of napalm in the Vietnam war? Did it eventually contribute to the end of public support for the war?

https://en.wikipedia.org/wiki/Phan_Thi_Kim_Phuc


> What is disturbing you? the mere topic?

The fact that you seem to consider the act of looking at the resulting pictures to be the primary crime, even though that is more or less victimless, rather than the sexual abuse that children are subjected to in order to produce the pictures, which very obviously is not a "cyber" anything. Meanwhile you put murder into the category of "meatspace crime" when, following the same logic, it would also belong in the "cyber crime" category, because there can also exist pictures of the act of killing a person that people could look at.

This seems to be an expression of the focus on the vilification of a sexual orientation that people have no control over, namely pedophilia, while almost ignoring the thing that actually hurts other people, namely the actual sexual abuse of children.


Yes, I see what you mean, but that was never the intended message; note the ellipsis!

So just to be clear: I think it is obvious that you, me, and nearly everybody regard the making of child porn as a crime, without making any statement on how we regard the criminality of watching child porn.


Well, I believe you that you didn't intend that message. But I think the way you put it still reveals how you think/thought about it, and you are almost certainly not alone with that; it seems to be an expression of a sort of societal consensus, which is actually why I found it disturbing. No one would ever even get the idea of putting murder into the "cyber crime" category, but somehow that seems to be a natural thing to do for sexual child abuse for many people.


The real problem is that I classified subjects, instead of counts of crimes:

It should read:

* meatspace (a count of murder, a count of rape, ...)

* cyberspace (a count of distributing murder films, a count of distributing rape films, ...)

The reason nobody thinks of "actual" snuff murder films in the cybercrime category is because we are heavily bombarded with images of people dying: soldiers getting shot in the news are not considered snuff, cops and robbers shooting each other in movies are not considered snuff. But naked children are not that common in news or movies (perhaps in French/European movies, but that's just nudity in general, not specifically children).


Nah, that's just another effect of the same underlying cause:

Sexuality is something that religions have very restrictive rules about, and anything that doesn't fit within the bounds of those rules is considered immoral. Which is why homosexuals were and still are oppressed, and which is also why pedophiles are oppressed essentially just like homosexuals once were. Being a pedophile is widely considered to be a moral failing, just like homosexuality once was (and by some still is). This taboo leads to people not rationally looking at the actual facts of the situation, not considering the actual consequences of what is happening; instead, anything that is in any way associated with pedophilia is seen as equally objectionable, which in turn leads to a general lack of distinction.

"Child porn" is just a generic label for "terrible immoral stuff done by immoral people involving the idea of sexual practices involving children" in the public discourse, and a terrible label at that, given that it either tries to use a negative view of pornography to evoke an emotional reaction to an essentially unrelated topic, or trivializes the experience of abuse victims whose agony is documented by suggesting that it's in some way similar to consenting adults producing erotic works for the pleasure of other consenting adults to look at. It's a bit like using the label "child horror movies" for snuff videos depicting children.


EDIT:

Would you consider the "Napalm Girl" [0] to be child porn? Or evidence of the atrocities of the use of napalm in the Vietnam war? Did it eventually contribute to the end of public support for the war?

https://en.wikipedia.org/wiki/Phan_Thi_Kim_Phuc


Really, I don't think it's sensible to even frame it that way.

Should she have the right to control where this picture of her naked is being used? Yes.

Is the picture evidence of sexual abuse? No.

Is the picture evidence of other human right abuses? Yes.

Why should it matter whether it "is child porn"?


"Should she have the right to control where this picture of her naked is being used? Yes."

This is the hard question. I argue the group should not give her the right to censor this image.

Imagine she had the power to censor that image; then she might be bribed into censoring it. I think taxpayers have a right to know what happens with their money.

I think the next generations have the right to know what happened, and learn from the mistakes of the generations before.

Imagine such an image of a person abusing a child, encrypted and signed by community cameras and later decrypted to apprehend the perpetrator. I still think a lot of valuable information (for prevention) can be found: did the perpetrator make some kind of promises? Or use fear? How exactly do they lure a child into that situation? How can we prepare children better to recognize such situations? Etc...

Then there is also political activism: to the extent that conspiracies or power abuses arise, if the only way to politically imprison a person undetected is to prevent the "evidence" from being presented to the public, then we provide them with this loophole of censorship...


Imagine we could all simply watch who, how, and when the double agent and his daughter in Britain were poisoned, if they indeed were.

If they were, we can try to catch the perpetrators if they are still on our soil, or else publicly and verifiably convince people elsewhere of what happened, and then continue with higher priorities like Grenfell Tower and how to prevent such events (where many more people actually died, people who never even worked for the Russians and never consented to the higher-risk job of being a spy).

If it didn't happen, we can again focus on our real problems.

It would also generate unity within the community of citizens, instead of this constant contrarianism and doubt.


> It would also generate unity within the community of citizens, instead of this constant contrarianism and doubt.

Yes, it would. Via oppression. Humanity has tried that lots of times. I doubt you would want to live in any of those societies.


When has humanity ever used community cameras with threshold cryptography to make a provable society?

I do not propose to eliminate critical thought; I propose to eliminate the needless diversion of baseless claim and baseless counterclaim. Exactly so that critical thought is freed up to think about other problems!

We like to pride ourselves on having a system where everyone is innocent until proven guilty, but in practice innocent people are regularly convicted and guilty people regularly declared not guilty. There exists no correct a priori stance of being harsher or less harsh on false positives vs. false negatives. The only way to eliminate both is relevance: gather more faithful evidence. I believe decentralized mass surveillance to be the way out.

By removing the need to criticize questions of fact in the domain of meatspace human interactions, we free up attention and critical thought to consult our feelings (ethics) about proposals and governance of our society!

I consider this the opposite of oppression.


> This is the hard question, I argue the group should not give her the right to censor this image.

Sacrificing people's dignity for the supposed common good is very much not a road I am willing to go down. If we are willing to sacrifice an individual's dignity, there is nothing left worth sacrificing it for.

Mind you that this is distinct from limited use for the purposes of prosecution.

> Imagine she had the power to censor that image, then she might be bribed into censoring it.

Yeah, and then imagine I could strip you of your human rights because I value a common good higher than your human rights. Also, no matter what the law says, you cannot completely prevent people from trying to put pressure on other people. The solution is not to disregard the individual's dignity, but to create incentives that counter attempts to silence those who suffer. Create an environment where people will come to the conclusion that allowing their picture to be used is the thing they should do, all things considered.

As for your idea of "democratized mass surveillance", I guess it's a nice thought experiment, but not something that could realistically be implemented in a way that would actually be fundamentally better than what we have today. For one, the checks and balances we have today are a sort of implementation of the same fundamental idea of decentralizing power. But also, total democratization is not a guarantee of humanistic outcomes: witch hunts, oppression of minorities, and genocides are in a way highly democratic. The problem with the oppression of homosexuals, say, was not that the abuses were a secret, but that the majority opinion was that it was the right thing to do, and such mass surveillance would only have helped with it.


First, I wish to thank you for keeping this discussion alive; I really value the opinions of others on this idea (which is itself the result of refining earlier formulations based on the opinions of others).

"Sacrificing people's dignity for the supposed common good it very much not a road I am willing to go down."

I don't see how my system sacrifices a person's dignity? Could you describe how my proposals would destroy dignity in your view? Here are some definitions of dignity:

* The quality or state of being worthy of esteem or respect.

* Inherent nobility and worth: the dignity of honest labor.

* Poise and self-respect.

Or are you talking about the dignity of people previously working in centralized law enforcement systems, who would then appear superfluous/ineffective/archaic/unproductive/dangerous to citizens in my system? Like we view, say, slave-holders today?

"Yeah, and then imagine I could strip you of your human rights because I value a common good higher than your human rights."

I don't see how you could do that to me in my system. Or rather, you could, but you couldn't get away with it. Either you let me go at some point and I report the time and place of the events, so you get caught and do prison time. Or you don't let me go or kill me and my friends and family report me as missing, at which point I get tracked down and they find us alive, or me dead, and then later you get caught and do prison time.

"Also, no matter what the law, you cannot completely prevent that people will try to put pressure on other people."

I am not trying to prevent pressure in general (i.e., price communication is pressure too); I am specifically trying to design a system such that illegal pressure can be provably addressed. Say you hold a knife to my throat and pressure me to give up my wallet; I can then report and prove that.

"For one, the checks and balances we have today are a sort-of implementation of the same fundamental idea of decentralizing power"

I totally disagree: my goal is a provably law-enforcing society, not a trusted law-enforcing society.

This provable/verifiable decentralized mass surveillance I propose stands to the current trusted law enforcement systems in an analogous way as provable/verifiable cryptocurrencies stand to the current trusted financial system.

I also saw you used the phrase "democratized mass surveillance": note I systematically used the word decentralized, not democratized; there is a difference there. For example, we don't directly vote on a trial, nor on whether or not to decrypt imagery when something is reported; democracy is for the legislative branch. This proposal describes a decentralized executive branch of law enforcement (threshold crypto + cameras + client software + properly trained citizens in an occasional police role).


I've been thinking along somewhat similar lines. Here's an old thing I wrote about it. I'd be curious to know what you think.

"Total Surveillance is the Perfection of Democracy"

For once I disagree with RMS, re: https://www.gnu.org/philosophy/surveillance-vs-democracy.htm...

I believe that it is fundamentally not possible to "roll back" the degree of surveillance in our [global] society in an effective way. Our technology is already converging to a near-total degree of surveillance all on its own. The article itself gives many examples. The end limit will be Vinge's "locator dust" or perhaps something even more ubiquitous and ephemeral. RMS advocates several "band-aid" fixes but seems to miss the logical structure of the paradox of inescapable total surveillance.

Let me attempt to illustrate this paradox. Take this quote from the article:

    "If whistleblowers don't dare reveal crimes and lies, we lose the last shred of effective control over our government and institutions."
(First of all we should reject the underlying premise that "our government and institutions" are only held in check by the fear of the discovery of their "crimes and lies". We can, and should, and must, hold ourselves and our government to a standard of not committing crimes, not telling lies. It is this Procrustean bed of good character that our technology is binding us to, not some dystopian nightmare.)

Certainly the criminally-minded who have inveigled their way into the halls of power should not be permitted to sleep peacefully at night, without concern for discovery. But why assume that ubiquitous surveillance would not touch them? Why would the sensor/processor nets and deep analysis not be useful, and used, for detecting and combating treachery? What "crimes and lies" would be revealed by a whistleblower that would not show up on the intel-feeds?

Or this quote:

    "Everyone must be free to post photos and video recordings occasionally, but the systematic accumulation of such data on the Internet must be limited."
How will this limiting be done? What authority will decide who gets to collect (archive!) what and when? And won't this authority need to see the actions of the accumulators to be able to decide whether they are following the rules?

In effect, doesn't this idea imply some sort of ubiquitous surveillance system to ensure that people are obeying the rules for preventing a ubiquitous surveillance system?

Let's say we set up some rules like the ones RMS is advocating; how do we determine that everyone is following those rules? After all, there is a very good incentive for trying to get a privileged position vis-a-vis these rules. Whoever has the inside edge, whether official spooks, enemy agents, or just criminals, gains an enormous competitive advantage over everyone else.

Someone is going to have that edge, because it's a technological thing, you can't make it go away simply because you don't like it. If the "good guys" tie their own hands (by handicapping their surveillance networks) then we are just handing control to the people who are willing to do what it takes to take it.

You can't unilaterally declare that we (all humanity) will use the kid-friendly "lite" version of the surveillance network because we cannot be sure that everyone is playing by those rules unless we have a "full" version of the surveillance network to check up on everybody!

We can't (I believe) prevent total surveillance but we can certainly control how the data are used, and we can certainly set up systems that allow the data to be used without being abused. The system must be recursive. Whatever form the system takes, it shall necessarily have to be able to detect and correct its own self-abuses.

Total surveillance is the perfection of democracy, not its antithesis.

The true horror of technological omniscience is that it shall force us for once to live according to our own rules. For the first time in history we shall have to do without hypocrisy and privilege. The new equilibrium will not involve tilting at the windmills of ubiquitous sensors and processing power but rather learning what explicit rules we can actually live by, finding, in effect, the real shape of human society.


My proposal stems very much from nearly identical thoughts to those you just described!

Just posting to say I have read your comment, and will most certainly edit this comment to reply tomorrow!

I will probably also want to be able to contact you (by some method acceptable for us both, email? IRC?) if I ever rewrite this in a more accessible format, or perhaps to collaborate on this subject?


Cool. :-)

eff oh arr em ay en dot ess aye em oh en at

gmail.com


We need a new way of thinking about caches of secrets. It comes from this unpleasant truth: all secrets eventually leak. The evidence of the past few years teaches us that even state actors with unlimited resources cannot prevent their secrets from leaking.

A "leak" here happens when a trusted entity loses control of the secret to one or more untrusted and malicious entities. That's just a definition, not a claim that any particular government, company, or person is a trusted entity.

To counter this, we need multiple layers of defense.

One is the business of bricking the phones when the leaked secrets are exploited. That makes it plain that the secret has leaked. It's a valuable layer of defense.

Another is to make the secrets have limited useful lifetimes. Expiration and revocation for TLS certificates is a way to do that. Credit/debit card numbers can be deactivated and replaced rapidly. That's another way to limit the lifetime of a secret. Ozzie's proposal does not include a way to limit secrets' lifetimes. (Social Security numbers are problematic secrets: they too have unlimited lifetimes.)

A third layer is making the secrets have limited utility. If debit cards had daily spending limits, their secret numbers would be less useful than they are today, for example. Zero-day exploits are secrets with vast utility, for another example. Ozzie proposes a secret to unlock an entire phone. How about limiting that to, say, the phone's call log or SMS log?

A fourth layer is to keep the caches of secrets as small as possible, so a breach affects as few people as possible. Ozzie proposes the opposite of this.

A fifth layer: holders of caches of secrets must know they are strictly liable for breaches proportional to the damage they do. It must not matter whether the breach was due to negligence, carelessness, espionage, or salt water rusting out the safe after a storm. Large scale key escrow cache systems will never be able to meet this standard: nation states won't honor that liability, nor will they pay private companies enough to cover the insurance for it.

(Strict liability is not unprecedented: workers' compensation and the vaccine injury victims' compensation fund are two reasonably successful examples.)

People, companies, and governments holding secrets necessarily must consider what happens when (not if) they leak, and provide at least some defenses in depth like these.

Ozzie's proposal has weak and incomplete defenses in depth. That's why it's dangerous.


What if the proposal is changed so that the keys do not need to be kept forever? What if Apple is allowed (or even required) to publicize the private key(s) after, say, 10 years?


[flagged]


CAs aren’t exactly as bulletproof as you seem to think they are, and mess things up quite often, despite playing an easier game.

Easier because a properly run CA doesn’t hold the private keys for all certificates it has ever issued, so key theft allows you to sign new certs and impersonate people, but it doesn’t allow you to passively eavesdrop on encrypted communications, or decrypt old messages — forward secrecy holds.

Easier because there is a trust structure where certificates can be revoked. We can revoke wrongly issued certificates and the compromised signing keys, and we have contained the damage. Compare and contrast this proposal, where there is no mechanism to secure phones in the wild once their keys leak.


Robert Graham had a rebuttal, too, and he mentions how Let's Encrypt for instance is quite vulnerable to sophisticated attackers employing DNS spoofing attacks against it:

https://blog.erratasec.com/2018/04/no-ray-ozzie-hasnt-solved...

Also, I think most would agree that CAs are quite terrible at security. It's part of the reason why everyone thinks the CA system sucks and is not trustworthy and it's also why Google has been pushing for Certificate Transparency to be able to verify if the CAs (or people hacking the CAs) are misbehaving. But it's still a patch on a broken system.

And look at what's been happening with Symantec, too, the largest CA. They were terrible at managing their infrastructure in a secure manner. There are thousands of different CAs. Most are probably insecure or hacked; we just haven't learned about it yet.

Also, if you read Green's post, he mentions how this wouldn't be like the CA/software signing system at all. In those systems, you can revoke the certificate. If someone gets the master key for 500 million iPhones, those phones will either need to be bricked or they will remain vulnerable forever.

If the NSA/FBI would learn that the system was compromised, they'd never tell us about it either, because they'd know people would then be pushing for the removal of the system, and they wouldn't want that after fighting for so long for something like it.

I think Ozzie is being very irresponsible pushing for this. And the reason he is being this irresponsible is that he has a patent on it (Green links to it in the post), so he'd get royalties for billions and billions of phones for the next 20 years if implemented. That's really his main goal here. Ozzie wants all the credit and money for it, but I bet he's not going to take any liability when the system is inevitably broken.


> CAs seem to be able to keep their private keys safe.

You mean, except for the 23,000 that were leaked just this year?

https://arstechnica.com/information-technology/2018/03/23000...


>That dynamic is undemocratic. Whatever you may think about the US government, it is still far more legitimate to make these decisions than some guy in a basement and the CXOs of Google and Facebook.

For one, it's not the CXOs of Google and Facebook that cause the disruption in what's decided politically. If anything, those are totally in bed with the government, and the US government at that (i.e., not the governments of the other 96% of the planet, who are still their customers).

It's the average Joes and Janes that adopt said technology as a means to circumvent governmental dictates. And perhaps not for Bitcoin and Tor (which, in the end, are both fringe activities that 99% of the population doesn't participate in), but for things like DRM (tons of people use cracked DRMed content and stuff).

Second, there's nothing much "democratic" in the overreach of government into all areas of social life just because it can.

If anything, your argument can be inverted completely:

It's the government that loves the idea of increasing its reach beyond any previously accepted limits and loves using technology to achieve that.

Whatever privacy breaches or surveillance the government engages in today, for example, weren't possible, and weren't even considered worth mentioning in the law, pro or con, 10 or 40 years ago, before technology gave them the opportunity.

So it's the government who loves to "re-interpret" the law to extend its powers through technology (and does so at a massive scale), and not the "tech community who loves the idea of subverting the law with technology" (they do, but they have done very little in that regard).

P.S. Plus, CAs as an example of "safety"? LOL. Because they're so secure, and everyone loves their competence and the CA system, right?


This is a really nutty comment.

Symantec’s CA is publicly collapsing in slow motion right now, because they neglected their security.

https://security.googleblog.com/2018/03/distrust-of-symantec...

Honestly... your comment feels a bit like observing that terrorism is not a problem a month after 9/11.

CAs screw up, and when they do, it is extraordinarily inconvenient.


Tor was created and released to the public by the United States Naval Research Laboratory; it's an example of the US government giving people a useful tool, not of technologists subverting the US government.

https://www.nrl.navy.mil/itd/chacs/dingledine-tor-second-gen...


You haven't addressed the currently imaginary properties required of the secure processor.

CAs also have a revocation story. Fun stuff re-provisioning the keys inside millions of phones.

And then after all that the really bad actors will have illegal phones from China that no one seems to have the private keys for.


I agree about the security of a centralized vault being a key weakness, but the article omits a few key aspects of Ozzie's proposal:[0]

* A court order is required. It's not up to the tech vendor.

* Physical control of the device is required. No remote exploits.

* Access is enabled only to one device at a time. No mass hacking.

The point of security is to increase the cost to the 'attacker' (here we'll use that word even for legitimate government purposes); there's no perfect security; law enforcement can access data on iPhones already. Also, attackers focus on the weakest (i.e., least expensive) link and there's limited value in increasing the cost beyond the 2nd weakest link.[1] Except for the centralization of key storage and two other issues (see below), Ozzie's proposal might increase the cost to the level of law enforcement's alternative, acquiring a hacking tool. In fact, I've been thinking of something similar (court order, physical access required, notification to user) and might even have posted it to HN at some point.

Using hacking tools is much worse than Ozzie's process: there's no court check (or at least it's not as enforceable, because there's no tech company checking for a warrant), the user doesn't necessarily know their data has been accessed, remote exploits are possible, and so is mass hacking.

Also remember that private citizens can still encrypt their data at the file level using other tools, though of course most will not.

Here are weaknesses I see:

A) The use of other means of accessing devices would have to be outlawed, or law enforcement will continue to use hacking tools and citizens gain nothing.

B) Solve the centralization problem. Probably, the keys shouldn't be in the hands of the tech giants and should be distributed widely. EDIT: Perhaps require two unrelated parties for access?

C) If these new access tools are built into mobile devices, what happens in countries where people's rights have been taken away? The courts are often ineffective. I suppose the fact that the phones get bricked at least informs the user, and the authorities can use hacking tools anyway, so perhaps nothing is lost.

____________

[0] https://www.wired.com/story/crypto-war-clear-encryption/

[1] If I increase the cost of exploit A to $100,000 and exploit B costs $50,000, attackers will use B. If I increase the cost of A even further, to $200,000, it won't provide much more security: the attackers will still use B.



