The Horror of a 'Secure Golden Key' (keybase.io)
1025 points by jashkenas on Oct 8, 2014 | 230 comments



Schneier 5 days ago:

"Ah, but that's the thing: You can't build a "back door" that only the good guys can walk through. Encryption protects against cybercriminals, industrial competitors, the Chinese secret police and the FBI. You're either vulnerable to eavesdropping by any of them, or you're secure from eavesdropping from all of them.

Back-door access built for the good guys is routinely used by the bad guys. In 2005, some unknown group surreptitiously used the lawful-intercept capabilities built into the Greek cell phone system. The same thing happened in Italy in 2006.

In 2010, Chinese hackers subverted an intercept system Google had put into Gmail to comply with U.S. government surveillance requests. Back doors in our cell phone system are currently being exploited by the FBI and unknown others."

http://www.cnn.com/2014/10/03/opinion/schneier-apple-encrypt...


> Encryption protects against cybercriminals, industrial competitors, the Chinese secret police and the FBI.

And that assumes that the FBI will continue to remain "the good guys" from now on. Looking at the institution's past deeds, I'm not convinced that will be the case: http://en.wikipedia.org/wiki/Fred_Hampton#Raid_and_assassina... and http://en.wikipedia.org/wiki/Martin_Luther_King,_Jr.#FBI_sur... , just to name a few.


I don't consider them to be "the good guys" now.

The examples you gave, and several others, include instances of using agents provocateurs to entrap people who only came to their attention because of lawful and peaceful anti-government activities.

There have been numerous cases of informants working at the behest of the federal government being involved in some highly objectionable activities and direct involvement of federal agencies in the suppression of domestic dissidents.


Even so, there are weaker links in the security chain: carriers (the mobile equivalent of ISPs).

Your carrier has access to every URL you visit (http or https) and the entire contents of the packets sent to/from them (if http), regardless of the password on your phone.

Those carriers have been MORE THAN eager to bend over backwards & give away the farm to the feds. Just check Verizon's and AT&T's history with the NSA/FBI/etc.

So, YES, the article is golden, Schneier is golden, but the fact that weaker links exist should still instill vigilance & outrage amongst civil libertarians / freedom lovers.

And, even if there are weaker links in the chain, we should still applaud & support strengthening any/all links, while remaining vigilant/aware of where the weakest link remains.


They're presumably not MITMing SSL, so they have access to every host you visit, e.g. foo.com, via DNS. On SSL they cannot see the URL - e.g. https://foo.com/secret


They also know the volume of data being moved and the times it occurred, which could be useful information.


Often the byte count is enough to determine the URL. It's also a key component of how BEAST et al. are able to extract session cookies.


If not DNS, then via Server Name Indication (SNI)

http://en.wikipedia.org/wiki/Server_Name_Indication


You're absolutely correct. However, when you combine host + time + location data, all at once, on your phone, then your carrier indeed has TONS of data to spy on you, regardless of the encryption on the phone itself.

All good points, & my only point was that while we strengthen each link in the chain, we can't assume we're secure while weaker links exist.


Do they have full URLs in https? The hostname, yes, but the GET /path/to/embarrassing/page.html is passed within the encrypted channel. Am I correct?


You are correct. HTTPS is HTTP spoken over an SSL/TLS secure channel. The contents of the request and response are all fully encrypted.

The hostname is a special case, as it's present in the certificate presented by the host, as well as in the client hello if SNI is in use. (Both of these are unencrypted, as they're sent in the process of negotiating the secure channel.)
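
To make the split concrete, here is a minimal Python sketch (my illustration, not something from the thread; the host and path are placeholders). The hostname is handed to the TLS layer and crosses the wire in the clear as SNI, while the request line containing the path is only written after the encrypted channel exists:

    import socket, ssl

    host = "example.com"   # visible to an on-path observer: DNS lookup, SNI, server certificate
    path = "/secret/page"  # hidden: sent only inside the TLS-encrypted HTTP request

    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        # server_hostname is transmitted unencrypted as the SNI extension of the ClientHello
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # everything from here on (method, path, headers, body) is encrypted
            tls.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
            print(tls.recv(200))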


There is also this research on the IETF standard for "lawful intercept" in routers:

Exploiting Lawful Intercept to Wiretap the Internet

https://www.blackhat.com/presentations/bh-dc-10/Cross_Tom/Bl...


There is a subfield of cryptography called kleptography, which studies exactly the kind of "secure golden keys" being talked about here.

The DUAL_EC_DRBG is a great example of how "golden keys" can be implemented securely, because the underlying problem is the same one that a lot of elliptic curve cryptosystems rely on.

Only the NSA could predict the output of the generator with the same level of security as ECDH or ECDSA.
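
For readers who want the mechanism rather than the claim, here is a rough sketch of the Dual_EC_DRBG trapdoor (a simplification; the actual standard truncates 16 bits of each output, which an attacker brute-forces). With curve points P and Q, internal state s_i, and x(.) the x-coordinate:

    s_{i+1} = x(s_i P), \qquad r_i = x(s_i Q)

    \text{if } P = dQ \text{ and } R \text{ is a curve point with } x(R) = r_i, \text{ then } x(dR) = x(d\,s_i Q) = x(s_i P) = s_{i+1}

So whoever chose the constants and kept d can recover the next internal state from a single output block and predict everything after it; for everyone else, doing so is an elliptic-curve discrete-log-style problem.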


> The DUAL_EC_DRBG is a great example of how "golden keys" can be implemented securely, because the underlying problem is the same one that a lot of elliptic curve cryptosystems rely on.

I think it was Schneier who described encryption algorithms as being like a single fence post which is a thousand miles high. The vulnerability is not the encryption algorithm. The attacker is not going to break the encryption, the attacker is going to get access to the key.

The ways attackers might do that have very little to do with cryptography. Espionage, corruption, social engineering, bureaucratic incompetence, completely unrelated vulnerabilities in government networks, etc.


I still remember the SecurID fiasco, where RSA had the "golden keys" to all SecurID tokens, and hackers managed to get a hold of them by hacking RSA.


Obligatory xkcd: http://xkcd.com/538/


So what you're saying is, "golden keys" are no less secure than any other form of encryption.

People are repulsed when you call it a back door or a "golden key," but "secure golden keys" are simply key escrows. Escrows are one of the first things you learn to build after asymmetric crypto.

Escrows are well understood, secure, and generally regarded as a good idea. However, "back doors" are scary, sordid, and insecure?


> So what you're saying is, "golden keys" are no less secure than any other form of encryption.

The encryption is using the usual cryptographic primitives. The problem is that the encryption is not the problem.

> Escrows are well understood, secure, and generally regarded as a good idea.

They are not. Because they're the same thing as back doors.

Using key escrow as a method of e.g. password recovery is a security vulnerability. And at least then you're choosing who you trust and the trusted party doesn't have a nuclear neon target painted on it because the keys in escrow are only for specific parties rather than everybody everywhere.

That's really the main issue. It is manifestly unwise to create a system which, if broken, yields the keys to everything. Because it makes the ROI of breaking it so enormous that the quantity and strength of the attackers you attract will overwhelm any imperfect system, and all systems are imperfect.


> Using key escrow as a method of e.g. password recovery is a security vulnerability.

I have no idea what you mean by "security vulnerability" in this case. Key escrows are the primary way asymmetric crypto is used in practice. An escrow recovers the symmetric key a message was encrypted with and you proceed as normal.
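
One plausible reading of that description (my sketch, with PyNaCl standing in for whatever primitives the commenter has in mind): hybrid encryption where the per-message symmetric key is wrapped once for the recipient and once for an escrow holder, who can therefore recover the key and "proceed as normal":

    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox
    from nacl.utils import random

    recipient_sk = PrivateKey.generate()
    escrow_sk = PrivateKey.generate()        # whoever holds this can read every message

    msg_key = random(SecretBox.KEY_SIZE)     # fresh symmetric key per message
    ciphertext = SecretBox(msg_key).encrypt(b"hello")
    for_recipient = SealedBox(recipient_sk.public_key).encrypt(msg_key)
    for_escrow = SealedBox(escrow_sk.public_key).encrypt(msg_key)

    # the recovery path described above: the escrow agent unwraps the message key
    recovered = SealedBox(escrow_sk).decrypt(for_escrow)
    assert SecretBox(recovered).decrypt(ciphertext) == b"hello"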

> the trusted party doesn't have a nuclear neon target painted on it

By that logic, government agencies should operate exclusively by typewriter. They don't because crypto primitives are designed to be secure against unrealistically strong adversaries. No crypto paper has ever said, "Our construction is secure in the standard model as long as the ROI is less than epsilon."

You can go down the path of "what if there's another Snowden, and it's a snow day in Kentucky...," but you aren't doing cryptography anymore--you're just developing mild paranoia.


> I have no idea what you mean by "security vulnerability" in this case. Key escrows are the primary way asymmetric crypto is used in practice. An escrow recovers the symmetric key a message was encrypted with and you proceed as normal.

You're using an unusually broad definition of what "key escrow" means.

http://en.wikipedia.org/wiki/Key_escrow

And your definition is covering the scenarios that nobody would call a back door, so I'm not sure what point you're trying to make.

Giving a third party access to your private information is a security vulnerability unless you can trust the third party, and "you can trust the government" is a statement contrary to evidence.

> By that logic, government agencies should operate exclusively by typewriter. They don't because crypto primitives are designed to be secure against unrealistically strong adversaries. No crypto paper has ever said, "Our construction is secure in the standard model as long as the ROI is less than epsilon."

Once again, the crypto is not the issue. Nobody is expecting the attacker to solve the discrete logarithm problem. But breaking the cryptosystem is not the only way to obtain the encryption keys. You have a serious security vulnerability if the expected value of the attacker obtaining the secret key is larger than the amount of money required to bribe the relevant government employee(s). Or if the servers the government keeps the keys on are vulnerable to heartbleed/shellshock/whatever at any point after attackers learn of the vulnerability, regardless of the strength of the escrow cryptosystem.

> You can go down the path of "what if there's another Snowden, and it's a snow day in Kentucky...," but you aren't doing cryptography anymore--you're just developing mild paranoia.

It's not doing cryptography at all. It isn't a cryptographic problem. The strength of the cryptography makes little difference because the cryptography is not the weakest link.


That's an excellent rebuttal and I agree completely, but there is another issue with the golden key concept. If the key was only ever going to be kept in a secure vault buried under Fort Knox forever, it would probably not constitute a significant vulnerability. That's the picture golden key proponents paint when they describe such a system.

The problem is that government agencies have demonstrated very clearly that they will use such a key as often and as liberally as they can get away with. Every time the key is brought out and used, it becomes vulnerable to interception or disclosure to the point where pretending that it is sure to be safe is just completely out of touch with reality.


> If the key was only ever going to be kept in a secure vault buried under Fort Knox forever, it would probably not constitute a significant vulnerability

I think that's the main point when people disagree about this kind of thing. One side says "but theoretically our fort knox vault solution would be secure" and the other side says "What I just heard you say is 'If I sprinkle magic pixie dust around this intentional vulnerability and ignore it then it would be secure' and that's ridiculous; a group of humans would almost certainly not act securely around that vault, because they never have".

Of course the "other side" is right, but people don't think that way naturally so this kind of proposal/argument keeps coming up.


Every crypto system and every physical security system includes a caveat that states ROI < epsilon.


Schneier's blog explains his stance on this in great detail consistently over many years.

Key escrow is a back door.

Schneier has generally moved focus from designing and breaking cryptography (although he was a key contributor to the Skein SHA-3 candidate) to the bigger aspects of risk. And trusting the government or any third party is a risk.

You'll like his blog.


This many years after Clipper, you are really claiming key escrow as a legal requirement for doing business in the US is a practical idea? You really don't know how many bad outcomes would have arisen from Clipper?


>Only the NSA could predict the output of the generator with the same level of security as ECDH or ECDSA.

Only the NSA could generate the key initially. After that, who knows who will get access to it? You still have the same fundamental problem as with all backdoors, you've just eliminated one that most of them have (the fact that they're crap enough that you don't even need the back-door key to access them).


IOW, it's not a technical problem, it's a social one.

"The NSA" isn't a person. If "the NSA" has access to a backdoor that means there are possibly hundreds of people who have access to that backdoor, every single one being a mere human being.

It's not a secure golden key in a glass box with the label "in case of emergency break glass". It's a note on a shared whiteboard in an open plan office.


> How do we know, and be sure that this is true?

Predicting the generator's output reduces to solving the discrete logarithm on elliptic curves, which is incredibly hard unless you chose the private key.

Just like in any asymmetric crypto scheme, given only the public key, it's very difficult to find the private key.

(Actually doing it requires knowledge of how the generator works. I promise you it is easy given the private key, but you can go into its internals if you don't believe me.)

> Isn't it just trivial for someone to "recover" your data by pretending to be you, via court order, NSL or other means?

Without a "golden key," somebody would have to ask you. With the "golden key," somebody would have to ask the organization in charge OR you.


> The DUAL_EC_DRBG is a great example of how "golden keys" can be implemented securely,

Edward Snowden proved how this is far from secure. There are people at the NSA who, if they wanted to spy on a girlfriend, could walk out with those keys.

The underlying problem is that you're no longer relying on technological security and mathematical certainty, but rather on human security and the capriciousness of people who earn $65,000/year to keep secrets.

There's no security at all there.


Not only the NSA; everybody who gets hold of the key could predict the output of the generator. For the key to be used, and those who promote it always want to use it, there would have to be enough copies of it that it would very soon leak.

Your assumption that "golden keys" can be implemented securely is wrong. The only secure golden keys are those that are never used. Which means that, for such keys to be really secure, they mustn't be created at all.


How real is this issue?

Isn't it true that today, I can easily clone an iPhone's data by declaring I lost my phone and need to buy a new one and recover everything from iCloud?

Isn't it just trivial for someone to "recover" your data by pretending to be you, via court order, NSL or other means?


I agree! Just listen to the "Security Now" podcast from Steve Gibson.

Steve gauges how safe data in cloud storage is by asking one question: "How easy is it for me to recover the data from your cloud service if I lose/forget my password?"

On an iDevice, it is NOT that hard for the phone owner to do it. It is trivial for any government agency to fake the owner's phone #, text messages, or email to recover or reset the iCloud password.


>Only the NSA could predict the output of the generator with the same level of security as ECDH or ECDSA.

How do we know, and be sure that this is true?


I don't know why I got downvoted and I love how people don't give the slightest explanation.


There are plenty of things you can do. Your passwords (or whatever authentication method you choose to use) are a "back door" to get access to your device/accounts. What do you do to keep them secure? Your password can also be used by the bad guys - what do you do to reduce the chance of that happening and mitigate damage if it does happen? Now instead of thinking about how you protect your own information, come up with ways to protect your neighbor's, neighborhood's, city's, etc. and work your way up.

Implementing the "back door" might be a difficult problem, but stating a priori that it's an intractable problem is just lazy thinking.


I'll take the downvotes as an indication that I failed in my appeal to HN to discuss potentially workable solutions instead of just dismissing the problem as impossible. I'll go ahead and post my own possible solution in hopes of actually furthering discussion. A couple minutes of thought on the subject brings me to:

1) Have the device manufacturer, FBI and Federal Court system all create their own public/private key pairs. FBI and the courts publish their public keys; the device manufacturer includes their public key, the FBI's and the court's on each device.

2) Each of these organizations stores their private key in their own central secure facility. The key is only to exist in airgapped systems, further encrypted using a secure multi-party scheme so that multiple people are needed any time the private keys are accessed. Include personnel vetting, logging, video cameras, secure vaults, etc. to add further security.

3) When the user selects their password, the storage media is encrypted using that password, and the password is stored in a separate location on the device encrypted using the FBI's public key, the device manufacturer's public key and finally the courts' public key.

4) When law enforcement needs to decrypt the phone, they request a warrant from the court. When the warrant is approved, they send a copy of the encrypted password to the court's secure facility. The court decrypts it using their private key and hands it back.

5) The FBI (or the court, whichever) brings a copy of the warrant and the partially decrypted password to the device manufacturer. If the device manufacturer wishes to contest the warrant, they go through the normal appeals process. If not, they decrypt again using their private key and hand the resulting encrypted password over to the FBI.

6) The FBI takes the resulting encrypted password back to their own secure facility and decrypts it once more using their own private key, giving them the plaintext password. They now have access to the device, and could only have gotten it with a valid warrant, and only after a third party trusted by the suspect (the device manufacturer) had a say in whether or not to appeal the warrant.

There's my quick solution. Feel free to further the discussion by critiquing or improving it rather than downvoting to oblivion.
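
A minimal sketch of steps 3-6 (assuming PyNaCl sealed boxes as the wrapping primitive, since the proposal doesn't name one). The password is wrapped innermost for the FBI, then the manufacturer, then the court, so unwrapping must proceed court, then manufacturer, then FBI, each step at that party's own facility:

    from nacl.public import PrivateKey, SealedBox

    # steps 1-2: each organization generates and guards its own key pair
    fbi_sk, mfr_sk, court_sk = PrivateKey.generate(), PrivateKey.generate(), PrivateKey.generate()

    password = b"hunter2"  # the user's device password (placeholder)

    # step 3: layer the encryption, innermost FBI, outermost court
    blob = SealedBox(court_sk.public_key).encrypt(
               SealedBox(mfr_sk.public_key).encrypt(
                   SealedBox(fbi_sk.public_key).encrypt(password)))

    # steps 4-6: each party peels exactly one layer, in order
    after_court = SealedBox(court_sk).decrypt(blob)        # court, after approving the warrant
    after_mfr = SealedBox(mfr_sk).decrypt(after_court)     # manufacturer, unless it contests
    plaintext = SealedBox(fbi_sk).decrypt(after_mfr)       # FBI recovers the plaintext password
    assert plaintext == password

Note that the replies' objection applies directly to this sketch: compromising (or abusing) the three decryption steps breaks every device that shipped with those public keys, all at once.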


> There's my quick solution. Feel free to further the discussion by critiquing or improving it rather than downvoting to oblivion.

Rather than critiquing your specific solution, allow me to explain why no solution is possible in general.

0. Because this is Apple, people are assuming Apple's model, but it is not acceptable to enshrine that model into legislation for all of the reasons I could go into if they don't seem sufficiently obvious to anyone. And if Cyanogen on a Nexus device or Debian on a PC doesn't support the backdoor (or does but the device owner has the power to remove it), it is clearly unreasonable to hold Apple to a different standard, and in any event would be ineffective because the "criminals" (or anyone desiring privacy) could just use non-Apple devices.

1. Even if you are willing to pass a law against general purpose computing, if you have a backdoor that can't be updated you have both a security vulnerability (because flaws can't be fixed) and an ineffective backdoor (because if someone e.g. loses their keys there is nothing you can do about it).

2. If you have a backdoor that can be updated, whatever process is used for updating the backdoor is now a security vulnerability. Having the updates signed by the Director of the FBI and the Pope provides no security because the Director of the FBI and the Pope have neither the expertise nor the time to understand what it is they're signing, and a rubber stamp enforced by digital signatures is still a rubber stamp.


In the wikileaks era, when large governments cannot keep safe even their most private documents, you rely on multiple entities keeping their key private, including manufacturers (which we know are routinely compromised by governments, as demonstrated in the DigiNotar case and elsewhere). Good luck with that.

Also, what happens once a key is leaked? Are we supposed to throw away every phone and every computer?


This scheme is only broken if all entities fail to keep their keys secure. In the event of a key compromise, updated keys could be pushed out and used to reencrypt. Of course that presents another weakness. (Who signs the update? What if their key is compromised?)


"The courts" are basically a sieve (every newspaper hack learns this on his first day on the job), so you can count that one out very quickly. Manufacturers are only slightly better, and we've seen it with DigiNotar et al. That leaves the FBI/government, not exactly the strongest fort when it comes to public leaks. In that sense, Apple's marketing is right: the only person you should trust with the safety of your data is yourself.

Also, if you allow firmware to reflash its keys, then you have a mechanism that again can be subverted (as well as what you point out). I think we've seen with games that that sort of DRM doesn't work in the long run -- it's routinely cracked, but manufacturers don't care too much as long as it allows them time-windows long enough to make bank. Making it the basis of all data-handling in the land doesn't seem particularly smart.


But that's a bit of a different argument than the hand-waving Keybase was using.

Saying "it's not possible to build a secure backdoor" is a cop-out.

The truth is, we can build systems so that your data can be decrypted with either your key, or a master key. And we can build systems where the master key requires multiple coordinated parties.

Such systems can be used to implement secure backdoors, but only if we can trust those parties with the master keys to do their job properly.

That requires:

- Proper key security so their keys don't get exposed

- Proper access controls so that the keys are only used for the purpose for which they're intended.

The problem is that we know with complete confidence that all of the parties involved in that process will fail at one or both of those hurdles.

This isn't a technical problem. "Secure golden keys" are technically feasible, and not even particularly hard. It's simply an issue that there is no one that you ought to trust with such a key.
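
To make "the master key requires multiple coordinated parties" concrete, here is one standard construction (a sketch; nothing in the comment commits to it): Shamir secret sharing, where any k of n parties can reconstruct the master key and fewer than k learn nothing about it.

    import secrets

    P = 2**521 - 1  # a Mersenne prime; the field just has to be larger than the secret

    def split(secret, n=3, k=2):
        """Split `secret` into n shares such that any k of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def combine(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    master_key = secrets.randbelow(P)
    shares = split(master_key)                # e.g. one share each to three separate parties
    assert combine(shares[:2]) == master_key  # any two shares rebuild the key
    assert combine(shares[1:]) == master_key

The math does what it promises; the objection in this subthread is about everything around it, i.e. whether the k parties' people and processes can be trusted over time.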


It's not a cop-out because any such system will include its environment.

Your alternative is basically saying "it's theoretically possible, but in practice impossible". If so, the theory is incomplete: it fails to account for the human factor.


If the key is stolen, there's a good chance nobody will realise for months/years. Nobody will issue an update either. Also if some key is compromised, there's a good chance some organisations would rather keep that secret rather than announce the fact.


1) We have to trust the 3rd parties with the master key, not just now, but forever. You can't "un-give" them master key access, nor has anyone proposed such a system. The idea is already unworkable for that very reason.

2) If you have jurisdiction over someone, you can force them to decrypt something based on a warrant, so it isn't even desirable for a backdoor to exist. Rather than hoping they follow the rules in their ultra-secret room that normal people will never see, they can work through the court system, publicly.

3) There's a concept called "attack surface." A backdoor increases that, by increasing the number of secrets.

4) The article goes into detail over the real-world failures of systems like these.

5) The device manufacturer does not necessarily have our interests at heart and should not be the party contesting warrants in any reasonable system.

6) Even if the encryption scheme was sound, human error trumps all. We're introducing all sorts of elements, each of which is a point of failure, and none of which has interests aligned with those of the end user (us). OTP is provably secure, you know. It's the key management that gets you.

So we haven't demonstrated the need for this vs. compelling someone via warrant to decrypt and other means. We have good reason to believe that illegal, dragnet surveillance is the real reason for opposition. Even if you believe that our own government is (and always will be) perfectly virtuous with that power, there are many foreign governments with similar capabilities. Not sending them any traffic? Think again (BGP hijacks). And we have lots of experience showing that such systems leave people vulnerable to bad guys.

So this asks us to take on big risks for bad reasons and we know that. There just aren't any good reasons not to dismiss and oppose such systems outright and there are many good reasons to be suspicious of the motives of anyone telling us otherwise.

I blacklisted WaPo some time ago as not being a credible news source. This editorial only gives me further reason to keep them blacklisted.


> If you have jurisdiction over someone, you can force them to decrypt something based on a warrant

A lot of people in IT seem to be blind to this. Encryption is no different from a very good hiding spot. If the authorities know it exists and you know how to access the information within it, there are ways to get you to disclose that information. It's not like they will just let you walk away if you simply say "no".

And that's just for the normal "criminals" argument. For "terrorists" there are even more ways to get them to disclose information. Rubber-hose cryptanalysis is a thing.


And although this makes collusion more difficult, how does your plan prevent it? How can I, with my password and keys in my own control, prevent rogue FBI agents colluding with corporate employees (through brides, blackmail, intimidation) and frightened judges ("can't let the terrorists win!" even though no one's convicted yet) from accessing my data that ultimately they have no business accessing?

We've seen law enforcement abuse every form of power they've been handed (see National Security Letters for one example), so I don't trust them to Do What's Right. Ever.


Sometimes it seems like the entire Snowden debate is ultimately reducible to one's belief about this. In a phrase, one's belief that given N people with power P, q will eventually abuse it.

I think that as N grows, the probability that q >= 1 approaches 1. It strikes me as a self-evident feature of human nature.
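
Stated a bit more formally (my formalization of the intuition, with independence as an explicit assumption): if each of the N people abuses the power independently with some probability p > 0 over a given period, then

    \Pr[q \ge 1] \;=\; 1 - (1 - p)^{N} \;\longrightarrow\; 1 \quad \text{as } N \to \infty

so even a tiny per-person probability makes at least one abuse a near-certainty at scale.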


Actually I think power is more a force than a proclivity... Given the infamous Stanford prison experiment and now this, I wouldn't classify it as 'definitive' but the correlation is well demonstrated.

[source] http://www.sciencedirect.com/science/article/pii/S1048984314...


> through brides

Russian brides I suppose, in the context of this discussion?


I know you're probably just making a joke here but if not I believe he meant to say "bribes".


You don't prevent it. That's the thing about security - there is never such a thing as 100% secure. The key is coming up with reasonable expectations and implementing policies to mitigate the risk down to an acceptable level, [EDIT]then putting measures in place to limit damage if those measures do fail[/EDIT]. If I were a third party with access to your device, I could brute force the encryption. There is nothing you can do about it - I will eventually get your password; the trick is whether it will take me a few minutes or a few decades. What's an acceptable time frame to protect the data on your personal cell phone? Do you care if I get the personal data on it 20 years from now or have to spend $2 million in computing power to get it?
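
A back-of-the-envelope version of that trade-off (illustrative guess rates I'm assuming, not measurements): compare an offline cracking rig against a device that rate-limits guesses through hardware-bound key derivation.

    # worst-case exhaustive-search times under two assumed guess rates
    OFFLINE = 1e9         # guesses/second on an offline cracking rig (assumed)
    ON_DEVICE = 1 / 0.08  # ~80 ms per guess if key derivation is tied to the device (assumed)

    def worst_case(keyspace, rate):
        seconds = keyspace / rate
        if seconds < 3600:
            return f"{seconds:.3g} seconds"
        years = seconds / (3600 * 24 * 365)
        return f"{years:.3g} years" if years >= 1 else f"{seconds / 3600:.3g} hours"

    for label, keyspace in [("4-digit PIN", 10**4),
                            ("6-digit PIN", 10**6),
                            ("10-char alphanumeric", 62**10)]:
        print(f"{label:22s} offline: {worst_case(keyspace, OFFLINE):>14s}"
              f"   on-device: {worst_case(keyspace, ON_DEVICE):>14s}")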

Unlike me stealing your data, there are very legitimate reasons for law enforcement to have access to devices to carry out investigations. The warrant requirement is a check on abuse. If there's an issue regarding abuse of power (e.g.: bribing employees, intimidating judges, overly broad laws), it would be better for everyone to work to address the real problems than to take away the means for law enforcement to investigate actual criminals.


Here's why it wouldn't work: Non-US consumers, let alone companies, (which is to say, the majority of the smartphone market) have no interest in purchasing devices that can be rooted by the US government.

Correction: it could work, in theory, if US companies were willing to sacrifice massive amounts of revenue for no reward.


Then the device manufacturer can make an international version. The US Congress could pass a law requiring that all devices of a certain class implement something to allow law enforcement to decrypt them, but that wouldn't prevent the device manufacturers from selling a different version to their international customers.


Yes, but how would we trust them if they are already selling spyware infected phones in other countries?

The making of phones is going to get commoditized - everyone could make a phone in a few years, from parts and using open source software. Then we will have choices and never have to use spyware again.


Doesn't that just mean that as a wannabe criminal I buy my iPhone from some no-extradition-treaty country, safe in the knowledge that the FBI's keys aren't on it?


Each and every government on the planet would ask that their public key be inserted on each device/server. Even if you naively trust your present and future government, do you trust all the others?


You are blindly assuming the FBI and the courts are good guys, non-corrupt, and consistent. This is not borne out by evidence, to borrow a phrase from another commenter.


Others have given very good overall arguments against this. But let me give one specific technical problem with your proposal.

The gov't will be able to get the encrypted input for the device mfr, and the output decrypted from that. Given this data, they will have a huge head start in backing out what the mfr's private key is.

Even if this weren't so, it makes every major vendor a very cost-effective target for the government. They'll devote resources to either breaking their private key (as I described above), or just using straightforward human engineering techniques. This apparent extra layer of security will not last long in practice.

EDIT - in my 2nd para above, it assumes that a mfr has a single private key, which is probably not the case. But generating a unique key pair for every discrete device is both an expense problem and introduces difficulties with managing the set of keys, which makes its security even more difficult.


Let's not try to doublespeak our way out of this: a "backdoor" that is only in your mind and heavily hashed everywhere else is not a backdoor -- it's a front door of which only one person (you) holds the key.

A backdoor is a backdoor is a backdoor: an entrance on which you have no visibility nor control. And this is exactly what a "golden key" would be, regardless of how it's implemented.


A backdoor is nothing more than a second means of accessing a system. We're already used to having secondary measures in place to access our accounts for our own use (i.e.: password recovery). There is no 100% solution - our own primary passwords are sometimes stolen, and password recovery measures are regularly abused as well. The issue really boils down to allowing authorized users to access the systems and reducing the risk of unauthorized users getting in. Like it or not, law enforcement officers with a valid warrant to search your device are authorized users. It's in the constitution. Once you get past that, the thing to look at is how to reduce the risk of 3rd parties or law enforcement officials without valid warrants gaining access.


> password recovery measures are regularly abused as well

> how to reduce the risk of 3rd parties [...] gaining access.

Don't you see the contradiction there? Our systems are already full of holes, so let's create some more...?

You also assume I'm a US citizen, which I am not. The minute the US mandates backdoors for manufacturers, all governments will want in, including some very nasty customers. So what do you do next, have separate keys for each government and country-based firmware, i.e. even more holes? It's a slippery slope.

"The thing to look at" is how to remove the capability for third parties to access your data, regardless of who they are. Say that tomorrow I wake up in Nazi Sweden (not terribly unlikely, seeing current Euro trends) - I'd rather not have brownshirts with keys to all my data. Would you?


I actually don't see the contradiction - our security systems are full of holes largely as a matter of convenience. We put in password recovery because we routinely forget passwords. We make stupid password recovery processes that involve mothers' maiden names because it would be too much of a hassle to do something more secure like show up at Google HQ with three forms of ID. The police don't have the same usability requirements. They only need access to the data on your phone once; they're not opening it every 10 minutes of every day to check your SMS/e-mail/HN posts/etc. In addition, they also have the requirement that they can't be opening your device without a valid search warrant. Any system put in place to allow them access needs to be designed with conditions like these in mind.

With regards to other countries, manufacturers are already required to comply with all laws in any country they operate in. I don't think Americans should have to make a decision as to whether or not their own law enforcement should have the ability to decrypt phones based on what Chinese police might do in their own country.

If you're worried about brownshirts taking over your government, there's nothing preventing you from encrypting the data yourself - nothing prevented you from doing it before. There are legitimate reasons for law enforcement to search a device after being issued a valid warrant; same as your house, place of business, safe deposit boxes, etc. Strong encryption by default on hundreds of millions of devices on which people conduct much of their daily business is something new. Locks and safes will slow down an investigation, but never to the point of bringing it to a halt. Strong encryption will. Given how central these devices have become in our lives, I don't know that it makes much sense to prevent police from searching them when they are legitimately investigating an actual crime.


You are severely misinformed, both about the constitution and the technological implications of the position you are blindly touting.

(1) The Mass. Supreme Court ruling/precedent holds that a suspect can be compelled, with proper legal protocol, to decrypt a device, and that this is NOT an infringement on one's constitutional rights. This makes the entire argument of "responsible disclosure" null and void.

(2) You seeing security holes as "forgot password functionality" shows how uninformed and inexperienced you are. When you start seeing everything from CPU fans (measuring audible signals to detect an encryption key) to the latest un-patchable USB bug (arbitrary code execution via invisible controller-chip firmware alterations) as attack vectors, you will begin to realize just how wide the potential 'surface' is. Care to venture what could happen if someone were to, say, reverse engineer the baseband and had access to all firmware functions of a device within an environment where the decryption key was being used? A "golden key" would simply create the largest hacking bounty in history, and only a fool would think such keys could be kept secure indefinitely.

When you are admittedly an amateur and experts are telling you this is a bad idea, you should listen. Whether human or machine, history has PROVEN that there is no "perfect system." Introducing unnecessary holes to accommodate bureaucratic pedantry is, I would venture to say, idiocy.


(1) I provided a link to two federal cases elsewhere on this thread that say otherwise[1]. Would you be kind enough to post a URL backing up your assertion?

(2) I'm not even going to bother responding to this one since you apparently couldn't be bothered to phrase your argument without an ad-hominem. If you care (which I doubt), I've already responded to a similar argument elsewhere on the thread.

[1] https://news.ycombinator.com/item?id=8430501


>> Like it or not, law enforcement officers with a valid warrant to search your device are authorized users. It's in the constitution.

It's not, of course, literally in the Constitution that you cannot encrypt things and then refuse to decrypt them for law enforcement. Nor is it in the Constitution that people selling encryption devices must include backdoors.

What words in the Constitution do you see as pertaining here?


I'd say the entire part of the 4th Amendment that deals with warrants is what pertains to this issue.

Of course you can encrypt things yourself, and I would imagine you could probably refuse to hand over the password under your 5th Amendment rights. Nothing's changed there.

I think the issue here is that someone else (Apple) has gone and set up an encrypted environment for you in such a way that actively prevents law enforcement from carrying out legal investigations with a warrant on an incredibly popular series of devices that they were capable of searching before. Let's face it - the number of people who are buying iPhones because they are now encrypted is minuscule. If you really wanted to encrypt all of your data, optional full disk encryption already existed on other phones. The problem the police have with it is that it's now set up by default, they can no longer access it when they do have a valid warrant and this extends to everyone instead of just the handful of criminals that would take extra steps to encrypt their data. John Q. Criminal didn't choose to encrypt his phone, Apple made that decision for him.


> The problem the police have with it is that it's now set up by default, they can no longer access it when they do have a valid warrant and this extends to everyone instead of just the handful of criminals that would take extra steps to encrypt their data.

Not at all true. The fifth amendment doesn't mean the court can't serve you with a subpoena for your device's unencrypted contents and imprison you until you provide it, for life if necessary.


There are a few federal cases that say otherwise. In US v. Kirschner[1], it was ruled that forcing the defendant to reveal his password through a grand jury subpoena was a violation of his 5th Amendment rights. The prosecution used a loophole for In re Boucher[2]: though the defendant still had a 5th Amendment right to not reveal his password, he was required to decrypt his hard drive in order to produce files under subpoena that he had already admitted were in his possession. Because he had already incriminated himself by revealing to a border patrol agent that his laptop contained child pornography, he no longer had a 5th Amendment right against self-incrimination for the charges of possessing child pornography.

[1] http://cyb3rcrim3.blogspot.com/2010/04/passwords-and-5th-ame...

[2] http://cyb3rcrim3.blogspot.com/2009/03/5th-amendment-bummer....


So what you are saying is that courts have ruled in some cases that the 5th Amendment protects people from being forced to reveal their passwords. To me that makes the case for a backdoor even weaker.

Why give law enforcement the ability to arbitrarily circumvent these protections? If the court says, "No, the defendant doesn't have to give it up," why would you want to have a mechanism in place for the prosecutor to say, "Eff the court, we're taking that information anyway"?

I'm no lawyer, but I suspect any evidence gathered that way would be thrown out anyway.


No one is advocating for the police to be able to circumvent the courts or gain access to information that the defendant has made a conscious decision to protect. You've always been free to protect your property in whatever fashion you choose, and likewise not reveal how to gain access to it. If you store your drugs, stolen property and murder weapons in a safe, you're not required under the 5th Amendment to give the police the combo and you're free to choose a safe offering any degree of protection. You have no 4th Amendment right to stop the police from drilling the safe if they get a warrant permitting it.

Likewise, you've always been able to encrypt your phone/computer/etc., and for the most part the police aren't able to decrypt it so long as you encrypt it properly. Nothing's changed in that respect. It's not a common enough phenomenon that an inability to decrypt some laptops will affect most cases.

The iPhone issue is different, though - it's not the user choosing to encrypt the device; it's Apple choosing to encrypt the device on the user's behalf. This won't affect the search of just a few suspects' property. Due to the popularity of the iPhone, it will likely affect conducting searches for a significant percentage of cases that wouldn't have been a problem before the last update to iOS. The police already needed a warrant to gather any evidence off of your cell phone, anyway; now Apple has gone and effectively stated that the warrant doesn't matter, the police don't have the right to search it to begin with.


We mitigate through compartmentalisation: Instead of one password for everything, each user has different username/password combinations, and hopefully different passwords for most services they access.

That is the key point here: The moment you introduce a backdoor you have painted a target on the parties that needs to hold those keys big enough to be seen from space.

Meanwhile, with compartmentalised keys, some users' accounts get compromised all the time, but no single leak is sufficient to get everyone.


Author of the post here. (Thanks, Jeremy.)

There are many details I avoided in the essay that people will get sidetracked on. Such as whether Apple is actually doing it right or not. This is discussed and speculated on elsewhere. And right now I'm far more worried about a world in which they're legally prevented from trying. When the FBI starts talking about our kids getting kidnapped, I think bills start getting drafted.

There are tangents I think HN would've cared about, but which would've been a distraction. For related controversy, Keybase - which I work on - lets users symmetrically encrypt their asymmetric keys, and store that data on our servers. We also let you do crypto in JS. Many hate this! Many love it! But I want to be legally allowed to write this kind of software and release it without a backdoor.
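
The pattern being described, roughly (a sketch with assumed primitives, not Keybase's actual implementation): derive a symmetric key from a client-side passphrase and use it to encrypt the private key before anything reaches the server.

    import hashlib
    from nacl.public import PrivateKey
    from nacl.secret import SecretBox
    from nacl.utils import random

    identity = PrivateKey.generate()              # the user's asymmetric key pair
    passphrase = b"a long, memorable passphrase"  # never leaves the client

    salt = random(16)
    wrap_key = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1,
                              dklen=SecretBox.KEY_SIZE)

    # only this blob (plus the salt) is stored server-side; without the passphrase
    # the server holds an opaque ciphertext, not a usable private key
    stored_blob = SecretBox(wrap_key).encrypt(bytes(identity))

    # recovery on a new client: re-derive the wrapping key and unwrap
    rederived = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1,
                               dklen=SecretBox.KEY_SIZE)
    recovered = PrivateKey(SecretBox(rederived).decrypt(stored_blob))
    assert bytes(recovered.public_key) == bytes(identity.public_key)

The security of the stored blob then rests entirely on the strength of the passphrase the user chose.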

It's hard enough when we all fight about implementation details and security vs. convenience. It'll be a real crap show when we know for certain that security has lost. I'll go program video games or something. Or just play them.


> I'll go program video games or something. Or just play them.

[post-post-edit: The funny thing is, I wrote this before reading the last portion of your post. No joke. I'm pretty sure the more our privacy is invaded, the more singular an entity a collective mind becomes. (half joke)]

It still doesn't fix that I feel like we are one step away from the complete loss of thought privacy. I'm 28. I grew up essentially connected to a computer. Half of my personality exists always, in a computer. My entire social life is on the computer, for the entire history of my history. The image I see my existence as is literally half mechanical (I study a lot of math and computer science and code a lot) and half physically organic, and potentially half abstract mental 'thought idea stuff' (extra halves for pointing out that the world is a wibbly, wobbly place).

Should the government have been allowed in the past, to have meticulously documented my childhood, my adolescence, my adulthood? Should they have access to my baby photographs? To my first boyfriend? To our first conversations?

How does that affect who I choose (or chose) to grow into? Is every step I take a step out of a fear of not conforming to both the present ideal of 'lawful person' but also to all worlds of the potential future ideal? When did I become aware of this, as a child? When did I decide I was cognizant enough to have made this decision? Was it when I turned 18, or was it when technology reached a certain age for the entire population?

Do I have to think about every possible way that people can perceive my actions, in order to make sure I stay the way I see myself - which is fundamentally on the basis that I am a good person?


> Do I have to think about every possible way that people can perceive my actions, in order to make sure I stay the way I see myself - which is fundamentally on the basis that I am a good person?

This is the money quote for me. No matter how hard I try, I can't seem to get this through to people: No, I'm not a criminal now. I don't plan to become a criminal in the future. I control my own actions, but I don't control how my actions are perceived by other people.

Should I be arrested because some combination of my purchases today and words written when I was 17 point to me committing a terrorist act? More to the point, should I have to worry about what someone is going to think if a computer flags my metadata as "worth investigation" and they start trawling through the entirety of my online life? What effect does that have on general conversation on the internet?

I think the thing I'm most afraid of is how do I explain this to my kids? In 10 years when they start forging their own life online, how do I sit down and explain that everything they do is monitored, and they need to be careful about everything they say and do on the internet? How do I explain that it didn't used to be like that, but we failed to stop it? It sounds melodramatic to type it out, but we've had an unprecedented amount of freedom to communicate with people around the world, and now that's being taken away.


> we've had an unprecedented amount of freedom to communicate with people around the world

No we have not. We thought we did, but authorities knew better. Remember the cypherpunk crackdowns, the crypto wars, NSA tampering with standards, etc. They were watching us quietly enough.

The real "freedom" existed for a very tiny period between email/news/irc existing and them registering on The Man's radar, which happened fairly quickly (and no thanks to pop media like WarGames for popularizing the idea of hordes of teenagers intent on vandalizing essential network infrastructure). The standards developed in response (PGP, SSL etc.) took 15 years to actually get traction (and I bet you're still not using PGP), and as soon as they hit the mainstream, they were attacked and subverted from inside the organizations trying to implement them on a large scale.

The freedom we thought we had was a myth. I think this is an important concept to keep in mind going into this inevitable political negotiation with authorities. They lied when they told us we were free, and they will lie when they tell us that "golden" keys will only be used for good. Just as you don't talk to the police without a lawyer, you shouldn't hand your encryption to them without a Schneier in the room.


It's really sad that this is the case. It's hard to not be pessimistic about the situation when it's put like this. It's also really hard to explain to people why trying to protect this sort of freedom and privacy is important.


> What does a scanner see? he asked himself. I mean, really see? Into the head? Down into the heart? Does a passive infrared scanner like they used to use or a cube-type holo-scanner like they use these days, the latest thing, see into me - into us - clearly or darkly?


Interesting questions. I think that if you try to answer those questions, you've got an interesting essay right there.


And if you get carried away with that essay, it'd probably make an interesting dystopian near-future science fiction story...


The Powers that Be are conducting a coordinated disinformation campaign to keep and expand their backdoor access by scaring the bejesus out of the public.

Apple will become the phone of choice for the pedophile.

What happens when they blame strong cryptography for the next gut-wrenching tragedy involving innocent kids and a bombing / school shooting / kidnapping / pedophile?

Your arguments will be swept away by a tsunami of grief, hatred and blind patriotism. Is it possible to make an equally emotional argument in favor of the freedom to be secure?

Secure information deserves the same prestige and protection as the Four Freedoms.

https://en.wikipedia.org/wiki/Four_Freedoms


I hate to make this kind of argument but it isn't hard:

If a hacker gets access to this golden key, they can use data gleaned from your child's phone to stalk or hurt them.


I was about to say almost exactly this. A country in which encryption requires a golden key is the country all pedophiles will want to move to.

Also, I agree that it's shitty, but sometimes you can only fight fear tactics with scarier fear tactics.


The problem is that the general public is going to take "unbreakable encryption" at face value. So don't worry, unbreakable encryption prevents the hacker from getting access to the golden key, because without the golden key even the government can't break into things. The whole thing feeds on itself.


So here's what we do guys: put the golden key in the black box!


Aren't there already documented cases of domestic abuse enabled by Find My iPhone? How much would you bet that whoever knew about and exploited the bug that leaked all those private celeb pictures didn't also have access to all the power of Find My iPhone as well?


Yes - either that, or naked photos of you on the internet. Imagine hackers taking every picture in your camera roll and posting it to the internet.


That is likely the tragedy of freedom: There are no emotional arguments for it.

It seems to me, politicians and other people have long sought the most emotional arguments to wipe away the wings of freedom -- and thus make the victims of pedophiles, bombings and kidnappings into victims all over again!

(BTW: In Germany, pedophiles were also used as an argument for internet censorship. Organisations of those victims protested against these attempts, because they clearly saw that this legislation did not help those victims (and was never intended to), nor would it prevent anything.)


"What happens when they blame strong cryptography for the next gut-wrenching tragedy"

The topic changes but the story never does. We still have rock music, teenagers, acid, dungeons and dragons, hippies, video games, atheism, firearms, meth, Islam, gays, texting while driving, basically all the good things in life, and we'll always have thousands of things in the future that are not cool with the conformist old people and will be blamed for everything bad whenever possible, and aside from a lot of journalist click bait nobody will really care.


Uh, is it just me, or does "texting while driving" not belong in that list? Do you really think people care about it because they're "conformist old people"?


Also worth noting that in the case of cryptography, virtually no one "blaming strong cryptography" for anything actually understands much about complex cryptographic protocols aside from what they know from watching television.

It's even more of a whitewashed canard than those other things you listed.


Man that brings back fond memories of being a teenager driving down the highway blasted on meth. Thankfully texting hadn't been invented yet.


...the next gut-wrenching tragedy involving innocent kids and a bombing / school shooting / kidnapping / pedophile?

It's frustrating that TPTB don't actually care about these cases at all, apart from their political potential. Amber alerts are usually issued in response to non-custodial-parent custody disagreements, a phenomenon that is 95% driven by divorce lawyers and involves very little risk that isn't created by the police themselves. In the rare actual stranger-danger case, "authorities" routinely allow multiple hours to pass between the kidnapping and issuing the alert. Of course by that time the child is beyond help.

http://www.kansascity.com/news/government-politics/article34...


Complete nonsense - you're equating imperfections that result in failure of a system with complete disinterest. For each anecdotal example of failure, as you cite here, it is equally easy to find anecdotal examples of success, eg: http://edition.cnn.com/2013/08/13/justice/amber-alert-hannah... I don't see any evidence here for your contention about the proportion of alerts issued in divorce proceedings, or a systemic pattern of indifference on the part of law enforcement.

Also, the government spends quite a lot of effort on the investigation and prosecution of child pornography, as documented here, for example: http://www.justice.gov/usao/waw/press/categories/childporn.h... To what degree this should be a factor in encryption policy is open to debate, but your charge that 'TPTB don't actually care about these cases at all' is baseless and does a disservice to the people who investigate such crimes.


What happens when they blame strong cryptography for the next gut-wrenching tragedy involving innocent kids and a bombing / school shooting / kidnapping / pedophile?

Pedophiles (or more specifically: child pornographers) routinely use strong cryptography and effectively nobody cares.


The point the OP is making is that nobody cares because nobody has been told to care. All it takes is a well-timed news article on how some Bad Guy used Irresponsible Unbreakable Crypto to change that.


Do you believe that your beliefs are dictated by single "well-timed news articles" directed by "the powers that be"?


Dictated? No. Profoundly influenced? Yes, yes I do.

I think that I'm more than capable of detecting most forms of propaganda, but I hold no illusions as to my ability to judge _everything_ that comes my way in a fully informed manner.

I can look back and see cases where too little information, bad information, or even just a misunderstanding led me to believe in things that just weren't true. And I'm pretty sure I'm not the only one.


It's not about the technically educated minority (though even they are vulnerable to falsehoods perpetrated via propaganda), but the vast majority of people who aren't aware of how computer security works and believe that the government is always working only in their interests.


They can be; that's what the whole concept of memes is all about. It happens.


Encryption doesn't kill people, people kill people.


99% of all terrorists and rapists use shoes.

Do the math. Shoes = Evil.


Don't be alarmed, we have an entire agency devoted to that now.


Some feedback for you: Thanks for conveying this in an easy-to-read style with simple examples.

This makes it far easier to send to our non-technical friends, who struggle to understand what this is all about.

Unfortunately, it always seems like these kinds of tricky technical issues are communicated by the mainstream media using oversimplified, easy-to-understand language (often, in support of government entities) and, on the other end of the spectrum, us nerds end up talking about it, by default, in unapproachable language and format.


Fantastic article, and a case well made. Thank you for writing it. I have one small criticism: "On June 20, 2011 Dropbox let anyone on the site login as any other user". This can easily be misconstrued as though it were a malicious free-for-all open-access day sponsored by Dropbox, which would be an unfair perspective upon what was obviously a very nasty security bug.


Thanks for your comprehensive response to a genuinely chilling editorial. The way WaPo used a synonym to dismiss the whole issue doesn't inspire confidence. Surely the closest analogies, though, for this mythical "golden key" are the many cracked DRM technologies. These weren't weakened by government fiat, but were still rendered worse-than-useless quickly.


Your article didn't address this argument in favour of a 'secure golden key': If the user forgets their password, they can still recover their data.

My guess is that the vast majority of the general public trust Google to keep their data secure more than they trust themselves to remember a password.


The Ultimate Godwin reason.

The French people, presumably, trusted the French govt. in 1936 when they filled in their census data.

Big Data is not a modern phenomena. It was invented in the 1930s by IBM and the Hollerith machines. The success of tabulating 40+ million Prussians was repeated across Europe.

The French census data was bravely protected by prevaricating French administrative staff who had realised the consequences, but delay turned into eventual capitulation.

http://www.ibmandtheholocaust.com/

You cannot trust your future government; it could be anybody, with any agenda.


I think this is a big point that gets overlooked. The question isn't really whether or not the current powers are trustworthy with your data. It's really whether every possible steward of your data is trustworthy over an arbitrarily long time-span.


Not just governments, but corporations as well.


At some point the difference ceases to matter.


I think that's a really interesting point that isn't adequately considered. It's also surprising that there isn't more liability and therefore risk associated with systems that make mass breaches (Target, Home Depot, J.P. Morgan, etc.) possible but I suppose no one's particularly concerned about consumers.


At some point the data could be considered more valuable than the original business activity and some organization makes moves to gain legitimate control of it.


The rhetoric that a 'Golden Key' will only be used by the 'good guys' is like thinking we can create a 'Golden Gun' that will only shoot the bad guys.

The two issues I see are:

1) The bad guys might still be able to get the key (as the article discusses)

2) We don't know who the 'good guys' are. It's very comforting to think that there's some mysterious superhero who will only fight for good but this ignores the large issue: no one can really agree on what "good" is.


2.5) We can't agree on what "good" is because there is no unified 'good' side here; there are a giant pile of adversaries. Everyone will say they're the good ones and everyone else is bad. As one obvious example, the intelligence services in one country won't necessarily trust those of another, and vice versa. The only sane implementation is to use the definition of 'adversary' provided by the person whose data you're protecting. Anything else would be broken.

This is the same key escrow battle we had when cryptography was first propagated: "Oh no, it actually works, and we can't trivially break it, it'll be the end of the world!". Only this time, the battle is "Oh no, it actually works for people who aren't technically savvy, it'll be the end of the world!".

The answer is the same now as it was then: no, go away, we're building solutions that work and you're asking us to make them broken.


This is great.

Still, if we start arguing that maybe your government is the bad guy, certainly we won't convince the government. Also we won't convince the people who blindly trust the government.

Therefore, it might actually be most productive to avoid point number 2, as valid as it is, and take as a premise that the government and Apple are both currently good guys. And then still make the case that a secure golden key is a bad idea.

With this in mind, the closest I dared to get was the discussion of the future. Governments do change. Whoever has access to your golden key might be a bad guy tomorrow, or invaded by a bad guy.

Btw, I even took out a whole section about questionably lawful warrants before publishing. For this reason.


0) A backdoor is still a door; it can be used to enter your system, whether you allow it or not, whether you've shared the golden key or not.


As far as I can see today, there are no good guys at all in the world.

There are only some who try to be good, and so many of them fail.


Thank You!

It's very depressing to see constant regurgitation of cops-and-robbers morality.

The FBI has deep historic ties to the Pinkertons, and has historically spent at least half of its resources (conservatively) on anti-radicalism. The DEA is so deeply intertwined with the CIA that half of the drug dealers in the world are protected and immune.

Local police forces are the only entity that is serious about "fighting crime," and their trustworthiness is inconsistent from region to region, to say the least. When you consider that the largest, most economically damaging crimes are being committed by banks and hedge funds, we can basically conclude that society is preyed upon by forces that have no counter.

Instead, we're arguing over a fantasy that somehow we need the police and FBI to protect us, thereby meaning we have to give up whatever meager protections we have against them.

I'm no Apple fan. In fact I'm still incredulous that this really means what they say it does -- that iDevices are secure by default. But if it's actually true, then freaking HOORAY FOR APPLE. Enjoy it while it lasts.


You are welcome.

  we can basically conclude that society is preyed upon by forces that have no counter.
That is what distresses me as well! It is becoming more and more apparent that there is no moral anchor left in our societies -- and where reference is made to a higher being (as on the dollar bill), the situation is even worse. In Germany, we have a "christian" party with "christian" explicitly in its name -- but that is also the party most ensnared with lobbyists and corporate interests, and the one that cares least about people. It is just a tradition, used to brainwash the people.


Golden Key vs the Golden Gun needs to be a meme. If this were to happen, then everyone (more importantly, laymen) would understand why it just doesn't work.


3) there could be a Snowden in any secret room at NSA. Even the "good guys" might not be so submissive. You don't know what gets leaked.


This Golden Key idea is one of those classic "I'm just an idea man, I don't have to figure out the nitty gritty details!" claims.

Everything is easy if you can leave figuring out the intractable problems to someone else. When you actually sit and think about this for a few minutes the problems quickly become apparent.

The closest real world system to this is DRM on games consoles (e.g. XBox 360). They use secret "Golden Keys" to protect authoring but still need to allow users access to that same information.

However, the way DRM worked in that context was a combination of hardware (e.g. roll-back proof) and tying the whole thing into a mandatory update system (i.e. no games without updates).

That DRM actually got cracked several times, but because updates were required they could just ignore it and move on. The problem with the Golden Key suggestion here is that once your data is compromised it is game over forever. There are no do-overs as there are with DRM systems.


> This Golden Key idea is one of those classic "I'm just an idea man, I don't have to figure out the nitty gritty details!" claims.

Most definitely. And even the way it was proposed at the end of the original WaPo editorial was very nonchalant:

> However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

Let's all be sure to remind the "non-technical" public that there is no wizardry involved here. It's logic, mathematics, and science and some things are possible and some things just are not. Encryption in this sense is very black and white: either your data can be decrypted by a 3rd party or it can't. And if it can be decrypted by one 3rd party, you must assume it can be decrypted by any 3rd party.


>> The problem with the Golden Key suggestion here is that once your data is compromised it is game over forever. There are no do-overs as there are with DRM systems.

For the game consoles there aren't really feasible do-overs either if the keys at ring 0 (the earliest point in the security chain) are revealed. As in, the only path to recovery is to physically replace every game console out there.

http://www.ps3hax.net/2012/10/marcan-fail0verflow-about-lv0/

>> The effect of this is essentially the same that the metldr key release had: all existing and future firmwares can be decrypted, except Sony no longer has the lv0 trick up their sleeve. What this means is that there is no way for Sony to wrap future firmware to hide it from anyone, because old PS3s must be able to use all future firmware (assuming Sony doesn't just decide to brick them all…), and those old PS3s now have no remaining seeds of security that aren't known.


I'm surprised that the Clipper Chip wasn't mentioned. That was an attempt to have the government hold the (a) key for encryption hardware. It failed for obvious reasons - too big a hole, and no one trusted the NSA (correctly, as it turns out).


I remember the Clipper Chip well - it was one of the first issues to introduce me to the EFF and the (still growing) war against general purpose computing.

Regarding the Clipper Chip, there is an observation that I haven't seen talked about in these discussions. There is an important difference in the NSA's tactics with their Clipper Chip proposal and their more recent activities: while technically and socially flawed, the key-escrow plan at least attempts to respect the constitution.

By keeping the keys at a 3rd party, in theory the NSA was limited to what they could ask for. Automatic rubber stamped warrants may allow them access to a lot of data, but it (again, in theory) would have been tiny in comparison to their current dragnets.

At first I took this as implying a change in the goals or leadership at the agency; a lot can change in a decade. But then we were introduced to BULLRUN and how it may have targeted[1] IPSEC. Given that the IPSEC working group was started in 1995, this suggests the NSA may have been trying to subvert opportunistic crypto during the Clipper Chip imbroglio.

While it is speculation, this raises the possibility that the Clipper Chip was just a distraction while the NSA's eyes were on the larger goal: the crypto standards themselves. Of course, any proof of this is buried deep in the NSA and not likely to see the light of day anytime soon.

[1] http://www.mail-archive.com/cryptography@metzdowd.com/msg123...


"this suggests the NSA may have been trying"

Well, we know that an NSA contractor on the IETF removed opportunistic crypto from the PPP protocol. See http://cryptome.org/2014/01/nsa-rep-dirt.htm

I think it would be prudent to believe that the NSA was systematically throwing up road blocks then, and it would be prudent to believe that they're doing it now. I can't even see how such a claim would be controversial. It is their job after all, at least given the presumption that foreign bad guys use and rely on domestic technology. And we know that the NSA makes that presumption.


Agreed. That was my first thought when law enforcement started getting press for screaming "think of the children". I'm surprised and disappointed that people have apparently forgotten recent history.

This is why I greatly respect historians as a profession.


Another reason why Clipper Chip was a Very Bad Idea is that, when strong encryption without back doors is outlawed, only outlaws will know how to make secure crypto. It would have created a War on Forbidden Knowledge. Brrrr.


Also, what could the U.S. government do when this knowledge is so widespread and in so many forms at this point? Takedown notices to GitHub? Cease-and-desist orders to Linus Torvalds? The encryption cat is out of the bag to a certain degree. Legislation along this line would simply further harm U.S. business interests in the global marketplace, even if it were desirable.


The Post's proposal is so obviously ridiculous it's almost difficult to discuss fruitfully. The fact is, it doesn't really matter who has access to an individual's secured data; as long as somebody other than the owner has access, privacy is violated. There's no such thing as half measures when it comes to security, and there's no such thing as a semi-permeable security "membrane", so to speak.

A story: The Belgian census bureau in 1930-31-32-33 conducted a nationwide survey, noting down, among other things, people's religion. Although this was done innocently for records' sake, when the Nazis invaded the country, they had access to a perfect registry of Belgian Jews.

The takeaway is that the "good guys" won't always be in possession of sensitive information, no matter how good they are. Safety comes in people's ability to manage their own information, not in their ability to manage others'.


> If Apple has the key to unlock your data legally, that can also be used illegally, without Apple's cooperation. Home Depot and Target? They were recently hacked to the tune of 100 million accounts. Despite great financial and legal incentive to keep your data safe, they could not. ... your data is only as strong as the weakest programmer who has access.

That's the main point here, which is excellently put.

Also, the graph near the bottom is great too -- the point that our data is becoming more and more intimate is really relevant. That's something really salient, which you rarely hear mentioned these days.


To me, that's not the main point. The main problem with a universal backdoor "so that the good guys could get in" is that the good guys with all their good intentions have no business pawing through your private stuff. It's not about having a good reason, or having a mandate. It's about your rights to not be subjected to warrantless searches, which is exactly what a golden backdoor would enable.

In other words, even if some genius devised a backdoor that was perfectly secure (allowed access only to those who were explicitly granted that access), had it verified by an international group of geniuses 10x smarter than she is to be sure it's actually secure, and granted access to only the most trustworthy organizations, which would of course act in the most transparent way possible with a clear cut mandate and individual accountability, it's still a violation of your privacy, a violation of your rights, and in the US a violation of the constitution.


Maybe the way to present that to laypeople is to say, "If law enforcement can unlock a child predator's stuff and look through it, then they can also unlock your stuff and look through it. All of it."

The last sentence is important. A large number of people have some area(s) that they wouldn't particularly care to have law enforcement looking at closely, even if they aren't overtly criminal in any way.


>"they can also unlock your stuff and look through it. All of it."

I have had this conversation. The response is, "I have nothing to hide. My life is boring." I have gotten this response from people I know have things to hide. One guy had cheated on his wife and was illegally selling marijuana. But he said "I'm a small fish. They have more important things to deal with."

For people who warned about NSA spying, they watched as the response went from "that's impossible" to "that's illegal" to "that's not happening" to "it's targeted" to "it's everybody overseas only" to "it's everybody but why do you care." What I learned is, many people are apolitical and they will just use whatever is the most convenient excuse that justifies continued inaction.


> in the US a violation of the constitution

The fact that there is a backdoor is not a violation of the Constitution. If it's only used with legal warrants, then it's well within the Constitution.


You are technically correct, which is the best kind of correct. On the other hand, the NSA has a secret court that, in secret, grants it warrants to wiretap anyone it wants. While that's a neat loophole, it should be closed. I don't think that dangling a carrot the size of a golden backdoor in front of the law enforcement and spy agencies will result in anything other than more rubberstamp courts and very questionable warrants.


I'm not in favour of the 'golden key' but saying that the existence of a golden key violates the Constitution by fact of existing is not true.


I should have been more clear, but what I am saying is that its existence within the US will almost undoubtedly be used in violation of the Constitution.


You are technically correct about "technically correct", which is the best kind of "almost correct".


There's one point which I see far too rarely: even if the current people in power were all 100% trustworthy good guys, you can't guarantee the people in power 10 years from now are any good at all.

In the face of every other objection, THAT should be the one to attack proponents with. If you wouldn't give the powers to your worst enemies, you shouldn't even think about giving them to your current friends.


In addition to intimacy increasing over time, it's also a very continuous change. So people aren't really thinking about how it's changing. They're just doing it.


It's interesting that a lot of the arguments against backdoors in the comments here sound similar to NRA/gun-rights arguments. Whatever I (or you) may think about guns, I don't think that is a bad thing. It may be a hint as to how arguments for keeping strong crypto strong could be successfully made. Yes, the NRA may go over the top and many think they're evil, but aside from cultural arguments, most of their arguments boil down to self-protection, and they've been very successful at making those arguments part of US culture.


The other worrying thing is the American-centric view of these issues. "Foreign governments" seems to implicitly imply "governments outside the US".

As one of the 6.7 billion people on the planet who are not US citizens or residents, to me the US is a foreign government.

For American companies to gain the trust of markets outside the US, they have to take this into account. Apple and Google know this very well. Want to sell to China? Germany? Brazil? If the US government wants to enforce a backdoor on US products, that's going to do tremendous damage to those companies' ability to compete in the global market.


> If the US government wants to enforce a backdoor on US products, that's going to do tremendous damage to those companies' ability to compete in the global market.

Nah, they'll just go "f*ck this" and share the backdoor with other governments (who will be extremely grateful). And then all these golden keys will magically show up on Iranian and Pakistani servers, Kim Jong-un will parade them, etc etc.


This old picture (from SOPA I believe?) sums up Washington Posts editorial board https://pbs.twimg.com/media/By8I4QGCcAELNPV.jpg


"A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed."

  - Second Amendment to the United States Constitution
As US law classifies cryptography as a munition, isn't it every US citizen's right to use encryption?


ARMs.


Arms is short for armament.

Armament = Weapon.

The Department of Justice classified cryptography as a munition.

Munition = Weapon.

In law, it's always about the technicalities, down to the definition of words. Prove me wrong.


I'm not disagreeing with you, just pointing out that if crypto is ammunition, then the actual weapon is the computer. So this could be read as the amendment forbidding the government from taking away your computers.

It would be a fabulous interpretation, although I'm somehow skeptical that we'll ever find a court endorsing it.


I think you're getting armament, munition and ammunition confused. The US Justice Department classifies cryptography as a "munition" (aka a weapon), not "ammunition" (used by a weapon).


"Munition" is originally a word for "fortification", and was then expanded to mean "weapon" or "ammunition" interchangeably. It's a generic term, so you can interpret it both ways.


I could be wrong, but I think "ARMs" was intended as a pun referring to the ARM cpus that are commonly used in mobile devices.


It's just a well-placed pun. Apple's mobile phones use ARM-based processors.

https://www.google.com/search?q=arm+processor+apple


Do the guys who suggest these things like "Secure Golden Keys" actually talk to their own IT staff before coming up with something that stupid?

Seems like that could have been curtailed by a five-minute trip to the computer room and asking the guy with the longest beard.


Everybody knows IT people are just lazy and will always tell you it's impossible.

Everything anybody needs to know about IT people can be learned from watching Star Trek.


The problem is that a technical solution is probably an order of magnitude more complex than a BitTorrent or Bitcoin system, and will take a bit of cross-discipline imagination.


The US government is a bunch of hypocrites. They pretend to care about "cybersecurity", but:

1) don't allow you to actually secure yourself properly, because then you'd also be secure against them, and that seems to be a big NO-NO

2) with each new "cyber-threat" they seem to want to remove even more of our rights and liberties

So forgive me if the next time the US government yells CYBER PEARL HARBOR PEDOPHILES TERRORISTS - I'm not in a rush to believe them.


The problem with a 'Secure Golden Key' is that it will, in almost every case, result in the creation of an illegal number; e.g. 09F9 1102 9D74 E35B D841 56C5 6356 88C0

And, of course, these illegal numbers can be leaked.


Can you elaborate further? Specifically - what is an illegal number and why is it commonly associated with a 'Secure Golden Key'?


An illegal number is a number which is ostensibly illegal to know or distribute under legislation like the DMCA. In this case, the OP is referencing the cryptographic key used for DVDs, which the MPAA licensed to DVD player manufacturers. Since knowledge of it enabled the bypassing of encryption (otherwise known as "decrypting the content with the key"), the MPAA attempted to invoke the anti-circumvention clause of the DMCA to prevent people from publishing it.

Any "Secure Golden Key" would be a number which is the encryption backdoor key. Knowledge of that number would enable you to decrypt any content encrypted with that key, and if someone who were not a government actor were to discover that number, it would doubtless be decried as illegal to know, possess, or publish.


There was a legal battle in the early 2000s about circumventing DVD copy protection -- the DVD CSS (Content Scramble System). The court ruling declared that the program was a "copyright circumvention device", and it was illegal to distribute the source code. This led to people creating illegal t-shirts with the source printed on them.

http://en.wikipedia.org/wiki/DeCSS

Later the same thing happened for Blu-ray. Only it was an encryption key that was discovered and subsequently illegal to distribute under the DMCA.

http://en.wikipedia.org/wiki/AACS_encryption_key_controversy



A "golden key" only makes sense to me if it is exchanged for transparency of when it is used.

Imagine a hardware device that can vend session keys to any device, but will not do so unless it can push a notification to a public server that publishes the identity of both the accessor (who must authenticate themselves) and the identity being backdoored.

If the argument is that law enforcement will only use this when going through proper channels, let this be enforced by forced transparency.

The hardware device would be designed to never let the underlying key leave the device. So for the key to fall into the wrong hands, the hardware device would have to physically fall into the wrong hands.

The hardware would therefore need to be housed in a secure location.

This is what I personally would require before I could ever support a golden key.


>A "golden key" only makes sense to me if it is exchanged for transparency of when it is used.

This is a very interesting point. Sadly it's just as impossible to guarantee transparency when using the key as it is to guarantee the security of the key :(


The idea of a secure golden key would be wonderful if you could always trust the keepers of that key to do the right thing. But you can't. The same applies to the idea of "you have nothing to fear if you have nothing to hide", which incorrectly assumes that what you perceive today as nothing to hide will always be perceived the same way by others that can view your communications.

The ironic thing is that eventually, those that the golden key is aimed at today will probably be the least hurt by it. The bad guys will take precautions and not trust the security of public services (they'll be using their own secure channels), while the good guys will continue to lose their privacy and liberty.


> How to resolve this? A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

I don't get it. What's the difference between a "backdoor" and a "golden key" other than that one sounds less threatening than the other? It's seriously clear that WaPo simply has no idea what they're even talking about. Did they not have a Software Engineer on hand they could run this past?


Based on the thesis "perhaps Apple and Google could invent a kind of secure golden key" so that the good guys could get to it if necessary:

The bottom line is: Sadly, there's no way to tell who the "good guys" are.


That's a weak argument because there's a lot of people who aren't willing to consider the possibility that an assumed-good guy might actually be a bad guy, either now or at a date in the future, and may even mock you for suggesting it. The HN gestalt itself has varied opinions on this matter depending on the topic.

Much better is just to observe that any such key can't be kept from the bad guys, whoever they may be, and backdoored encryption over time tends to be no encryption at all. This is simply the truth. For all we know Snowden carried out some encryption keys, and if he didn't, for all we know Snowden II will.

Making the former argument puts you in a position of trying to argue uphill against what may be some very solid presuppositions about the goodness of the good guys; the second argument is undeniable, and very simple. (Of course some will manage to deny it anyhow; there's no such thing as a perfectly effective argument. But it's much stronger.)


I don't think we disagree. My point is very much in line with "any such key can't be kept from the bad guys, whoever they may be".

Even if you give it to only the "good guys" and somehow ensure that nobody else could get it, there's no way to know the "good guys" will always be "good". It all depends on context. If the NSA is stopping people from harming innocents, they (rightfully) think they're the good guys. If they are secretly spying on said innocents without their consent, people (rightfully) think they're the bad guys.


Or simply: nobody can be trusted not to abuse power. That's why we have transparency, checks and balances, etc.

I can believe that a lot of people have good intentions. But that doesn't mean their actions are always right.


With all the revelations in the last couple of years, I think it's pretty clear there are no good guys at all.

From governments, to corporations, to malls, to your neighbors - they're all going to dig dirt on you given half the chance.


Here's another argument against it: Market share.

Scenario 1: Apple creates a secure system without a backdoor. Everybody uses it.

Scenario 2: Apple creates a "secure" system with a golden key that lets US law enforcement in. Nobody with a clue outside the US uses it. Several European governments ban it. Apple (and US) market share declines around the world.


"OK, just spitballing here, but how about instead of leaving a back door open, we just don't lock one of the windows? Is that a good compromise for everyone?"


This threat tends to be underplayed by our intelligence community.

> Threat #2... Even if you trust the U.S. government to act in your best interest (say, by foiling terrorists), do you trust the Russian government? Do you trust the Chinese? If a door is open to one organization, it is open to all.


China's potential iPhone market is likely bigger than the US one. The Chinese government can easily say: "Give us the golden key, or your phone is not getting into the Chinese cell market."


> perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant

Even if the Washington Post has no idea how cryptography works, this is incredibly naive politically.

"A court", you say? From which country? Google and Apple do business worldwide. Can China's courts access my data? Iran's? If Apple employees have the key, they decide. And if they decide, any country with enough monetary leverage can get the key. "You can't do business in China unless you give us access."

(Not to mention the head-in-the-sand implication that the US government would only get data using due process.)

The only way this can work is if the tech firm cannot access the data.


Isn't there a 'Secure Golden Key' at the base of DNSSEC? I guess the damage would be somewhat less if that key was exposed, but still.


Golden Key security underlies DNSSEC and TLS. Fundamentally you are trusting one or very few agencies to in fact have your security and best interests in mind.

This is normally when somebody starts talking about using GPG to replace TLS and DNSSEC. But this is usually somebody who doesn't realize how graph theory, or at least path finding, works, so they are ignored.


It's not the same as a Golden Key because you're not handing over your private keys to somebody else, nor are you trusting them to store private information.

There's no substantive analogy to be made at all.

Also, there are quite decent alternatives to DNSSEC. In particular, DNSCurve: http://dnscurve.org/

PKI is admittedly a mess at the moment. News at 11. But that's irrelevant to arguments regarding Golden Keys, aka backdoors to your own, private information.


Too true. Now that you're thinking along these lines, look at some of the alternative name-resolution systems and, hopefully, find one that you want to contribute to.

We need at least one viable alternative to DNS. Competition.


The good guys? Who are the good guys? The integrity of which government departments do you trust?


> Your personal pics, videos... will all be leaked, a kind of secure golden shower

The humor writes itself


I pretty much sprayed my monitor with coffee when I read that in the article. Tremendous :-)


I think there are two sides to this, but for the most part neither side will admit the negative aspects of their stance. Any "golden key" backdoor that is implemented, no matter how emphatic the promises, will be abused. On the other hand, having truly inaccessible data, no matter how much it enables our valid right to privacy, will also make communication between criminals, including terrorists, easier.


Ben Franklin (allegedly) said, "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

The old guy is seeming more and more prescient nowadays. Pretty clear that BF wouldn't have supported backdoors (especially not with the grim Orwellian doublespeak name "Golden Key.")


Although, it is entirely possible that what Ben Franklin actually meant was not what many believe[0]...

But who knows? I think it's a bit silly to presume what the Founding Fathers would have thought about modern concepts like cryptographic backdoors, when they[1] would have probably been far more appalled by all of the black people and women running around unattended than at the existence of a standing army or laws governing interstate commerce or whatever.

[0]http://www.lawfareblog.com/2011/07/what-ben-franklin-really-...

[1]generally speaking... i'm not implying all of the Founding Fathers supported slavery, obviously, just that their political and social priorities would not necessarily be what we might expect.


It's interesting how often the lessons of history need to be relearned. It doesn't matter that we can trust the government of today (to the degree we can, which isn't much, in my opinion); it's never wise to give governments ultimate power, because if the government then becomes untrustworthy it is much harder to remedy the situation. Moreover, our little "free" bubble is not the entire world: the governments that most of the people of the world live under are not merely untrustworthy, in many cases they are exploitative or oppressive. Not the sorts of organizations you'd want to give the ability to snoop into everyone's business.

Many folks are blessed with the fortune of living lives without turmoil or great consequence, and for those folks it may seem perfectly normal to allow the state into their affairs. But we should not force the state into everyone's affairs, and we should not choose to live in a police state out of some fool notion that there will be less crime or less terrorism there. That's not how it ever works out.


I keep having flashbacks to 1994... the Clipper Chip saga. Anyhow, here's a good read -- looks like the debate hasn't changed much. http://www.nytimes.com/1994/06/12/magazine/battle-of-the-cli...


I understand the rightful distaste for backdoors. However, it occurs to me that any encryption system that could get widespread use would need some kind of mechanism to recover data in the event of loss of password/hardware/authentication device, etc. The more valuable data is, the more important backups are. I think the general public would be reluctant to use any full-encryption product that could lead to loss of personal data such as email in the event of common human error. We need ways to recover data, and if those methods exist they could always be used by law enforcement. Encryption has to exist within the world as it is (with compromises), not how you may want it to be. A secure encrypted backup recovery standard may actually make users safer, because what we have now is so much more vulnerable.


I think the real point to take out of this is how important it is to have secure backups of your encryption key. Making sure that the encrypted data is backed up is common sense. Backing up the encryption key is just as important.

If I were to lose my encryption key, I would absolutely expect to lose my data. That's how encryption works. That's how encryption should work. If I can get my data back without the encryption key, someone else can too.


This is a little tangential, but I continue to be astonished that Dropbox has been able to continue operating without client-side encryption, especially after the 2011 debacle.


Users don't care about security or privacy. This has been demonstrated again and again, with Dropbox as a premier case in point.

By "care" I mean "change buying habits" or "plop down money." Talk is cheap. People will say they care about security or privacy but since they do not change their spending behavior these are just words. Where people spend money is the only thing that matters in terms of steering corporate behavior or product road maps.

What users care about is convenience, overall user experience (UX), and cost, in roughly that order.

Enterprises and governments do care a bit more about security, but they often don't understand it and make poor buying decisions in that space.

* Edit: A few tech-savvy or politically concerned users do care about security and privacy, but this is (1) a small category, and (2) generally a group that doesn't want to pay for things and thinks everything should be free. #2 is the most significant factor-- if you don't pay for things you don't exist.


That and the fact that they will inevitably lose their data when they mishandle or misplace their keys. In the rare case that they are the sophisticated type of user who will most likely (and even that likelihood could be debated) safely handle and retain their irrecoverable secrets, they're already solving their problem by doing client-side encryption on their own.

Maybe that's what you meant by convenience/user experience. Client side encryption with a nontrivial, irrecoverable key that must be remembered and kept secret by the end user is not a very usable system for the vast majority of people.


Tom Cross (security researcher, activist, etc.) has been talking a lot about potential abuses of lawful intercept for a while. If you liked this article and are looking for something deeper, his 2010 Black Hat paper is worth a read: https://www.blackhat.com/presentations/bh-dc-10/Cross_Tom/Bl...


Keybase absolutely MUST implement a "we haven't received any FISA requests, etc." type of page on their website, one which can be taken down in the event of an illegal intrusion by the government via this mechanism, before it's too late. One cannot add a page stating that they have had such a request, so the former is the only option.


I'm not sure "Threat #4. It Protects You From the Future" is quite accurate. It protects you in the near future, where the time taken to break the encryption is long. However, Moore's law and the discovery of flaws in present encryption methods mean that eventually, even items for which the only key has been destroyed will not be private.


> Moore's law

There are physical limitations on Moore's law. If you were to design a mathematically good/perfect n-bit cryptosystem that took 2^n tries to break, you can make some very strong guarantees. For example, even a perfectly efficient computer (1 electron flip per attempt) would (on average) take more energy than the sun will ever emit to break an ideal 256-bit cryptosystem.

> discovery of flaws in present encryption methods

This is the kicker. However, I'm hopeful. We have some really old cryptosystems that are still going 20-30 years later. As analytical techniques advance, I bet we'll find some even more rigorous constructions.


You'd better double that bit count; there's a generalised algorithm which lets you break symmetric encryption of N bits in 2^(N/2) time.

This probably still means that 256 bits is good enough. Of course, then there's RSA to worry about..


Are you thinking about the Birthday Paradox? Because a) that usually applies to hashes, and b) it applies when you're trying to find collisions between two variable strings, not one fixed and one variable.

Otherwise, you may want to get on the phone with everybody using AES-128, et al and let them know. 2^64 would be rather trivial to crack these days.


I think Filligree could have been referring to Grover's algorithm (requires many qubits of quantum computing power, which we don't know how to build, but we do know the algorithm)


He's thinking of Grover's algorithm.

And yet, if it could be implemented in a fast way, it would bring AES-128 into the range of trivial breakability.


Yet we have to recognize that a single, strong password that only you know is not good enough. Given the evolution of processing power, a password cited today as taking 100 years to break would take about 3 years to break in 10 years' time, and about 35 days in 20 years.

I hope by then we're thinking about which secure system to migrate to.


What we need is forced disclosure of interception by the Government. I propose the following:

- Anything the Government intercepts must be disclosed to the parties intercepted after some period of time.

- After three years, longer than the length of most criminal investigations, the Government must disclose intercepts, except that the Government may choose to withhold information for no more than 25% of intercepts. This allows for long-running intelligence operations and criminal investigations, while forcing disclosure of excess interception.

- Every three years thereafter, the Government must disclose all but 25% of the withheld disclosures. So, over time, more intercepts are disclosed, until all are; a quick sketch below shows how fast the withheld pool shrinks.

- Records of all interceptions, listing the parties involved and all other pertinent data but not the content of the interception, made within the US or of persons or entities within the US, must be reported within 7 days to the office of the Attorney General of the United States, the Administrative Office of the U.S. Courts, and the Librarian of Congress, there to be held permanently and securely. This puts a copy of the data in the hands of each branch of government.


> the Government may choose to withhold information for no more than 25% of intercepts.

Be careful of unintentional incentivization. The government could just intercept 4 times more than it really wants.


What? No. We need to prevent interception in the first place.


I would perhaps consider a system where N independent keepers of keys need to agree to a privacy override. Balance of power and stuff. It is possible from the crypto point of view but it doesn't really matter because...

...first we'd have to find at least one worthy keeper and it's a social problem, not a technical one.


Good observation on "golden key" == "backdoor". That reminded me of George Carlin's "Euphemisms":

https://www.youtube.com/watch?v=vuEQixrBKCc (May have strong language)


I don't think it was a euphemism at all! I think that the author of the article had no sense that "secure golden key" is just an implementation detail of "back door". Which is the sort of misunderstanding of "how stuff actually works" that becomes dangerous when influencing and defining policy.

There is a good debate to be had here, but it really shouldn't contain the word "wizardry" anywhere.


It might be an honest mistake today, but tomorrow it could be picked up as a word that would let some legislation go through without alerting the general public. I think it's important for the security community to make sure that "golden key" doesn't become a euphemism in the future. This article is a great start.


Call me crazy, but the first time I saw the article I thought I was reading an NSA press release. I'm very sceptical about the source of this article: did the author (and the newspaper) come up with the idea alone, or were they driven by third parties?


The "Secure Golden Key" is as secure for the internet as the "Wonderful Beating Stick" is "wonderful" for a kid. Just because we put "wonderful" in the name of a beating stick does not make it so.


The Washington Post is stuck reliving the crypto wars. Just close the tab; we can always let them back in when they have caught up to the 21st century.

Just a shame they managed to trick Orin Kerr into that mess they call journalism.


Has anyone ever tried to pass a law like this for the physical world? Something like: all safes and houses must have locks with a universal key that only the police know.


"Good guys". Apple. Google? Whuy? Why do adults have the moral understanding of a 10 year old.


Who decides who the "good guys" are?


This is just the key escrow debate of the 90s all over again.


It may be the same debate, but that doesn't mean the war is over, if that's what you're saying.


the clipper chip is back on the radar


Seriously, epic.

I hope this stays at the top for 3 days.


I don't understand why this is being posted here.


Started reading. Sounded interesting. Saw picture of He-Man. Closed window.


That was not very relevant or meaningful.


It's kind of hard to take the article seriously... This topic - government held keys for encryption services - is an old debate. If I remember, some countries even implemented key escrow. The He-Man picture lets you filter out the article pretty quickly.


Ever heard of the concept of "a picture is worth a thousand words"?

Here's my take on it and why I think you dismissed it too quickly (but of course you are free to do what you want):

He-Man is a cultural theme that everyone understands and can relate to. If you watched it as a child, you know what he represents.

So the author is making the point that a "magic tool that only the righteous may wield" is about as realistic as a Saturday morning cartoon. He does this in one line and with one image, instead of in many more lines of text that would have sidetracked his main points.


If it's an old debate, why are we still having it? http://www.businessweek.com/news/2014-09-30/u-dot-s-dot-seek...


The debate over Key Escrow was shut down by Clinton through unilateral executive action.

The political viability and constitutionality of the myriad issues mandatory escrow systems would give rise to were never tested. Furthermore, by the time key escrow was dropped the existing CALEA legislation was already a huge win for the FBI and DoJ. And given that it's 2014 and opportunistic/pervasive encryption is still more a pipe dream than a reality, the FBI and DoJ didn't care to expend more political capital pressing their case.

But now that we're potentially on the cusp of more pervasive encryption, we should all be prepared for some major PR battles going forward.


I'm pretty sure everyone who isn't grossly incompetent realizes that it's a bad idea, too. :)


I cringe at image macros and other internet memes, but I'll tolerate any marginally original silliness.


I know this may be a minority opinion, but I'm personally much less likely to share articles with my colleagues when they have cartoon images in them.



