"Ah, but that's the thing: You can't build a "back door" that only the good guys can walk through. Encryption protects against cybercriminals, industrial competitors, the Chinese secret police and the FBI. You're either vulnerable to eavesdropping by any of them, or you're secure from eavesdropping from all of them.
Back-door access built for the good guys is routinely used by the bad guys. In 2005, some unknown group surreptitiously used the lawful-intercept capabilities built into the Greek cell phone system. The same thing happened in Italy in 2006.
In 2010, Chinese hackers subverted an intercept system Google had put into Gmail to comply with U.S. government surveillance requests. Back doors in our cell phone system are currently being exploited by the FBI and unknown others."
And that assumes the FBI will continue to be "the good guys" from now on. Looking at the institution's past deeds, I'm not convinced that will be the case: http://en.wikipedia.org/wiki/Fred_Hampton#Raid_and_assassina... and http://en.wikipedia.org/wiki/Martin_Luther_King,_Jr.#FBI_sur... , just to name a few.
There are the examples you gave, and several others, including instances of using agents provocateurs to entrap people who only came to their attention because of lawful and peaceful anti-government activities.
There have been numerous cases of informants working at the behest of the federal government being involved in some highly objectionable activities and direct involvement of federal agencies in the suppression of domestic dissidents.
Your carrier has access to every URL you visit (if http), or at least the hostname of every site you visit (if https), and the entire packets sent to/from them (if http), regardless of the password on your phone.
Those carriers have been MORE THAN eager to bend over backwards & give up the farm to the feds. Just check Verizon's and AT&T's history with the NSA/FBI/etc.
So, YES, the article is golden, Schneier is golden, but the fact that weaker links exist should still instill vigilance & outrage amongst civil libertarians / freedom lovers.
And, even if there are weaker links in the chain, we should still applaud & support strengthening any/all links, while still vigilant/aware of where the weakest link remains.
All good points, & my only point was that while we strengthen each link in the chain, we can't assume we're secure while weaker links exist.
The hostname is a special case, as it's present in the certificate presented by the host, as well as in the client hello if SNI is in use. (Both of these are unencrypted, as they're sent in the process of negotiating the secure channel.)
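You can see this for yourself without touching the network: the snippet below (Python standard library only; `example.com` is just a placeholder hostname) generates a real ClientHello into a memory BIO and shows the SNI hostname sitting in it as cleartext bytes.

```python
# Generate a TLS ClientHello without any network I/O, then show that
# the SNI hostname appears in it unencrypted.
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")
try:
    tls.do_handshake()          # writes the ClientHello into `outgoing`
except ssl.SSLWantReadError:
    pass                        # expected: no server is answering us

client_hello = outgoing.read()  # the exact bytes that would hit the wire
assert b"example.com" in client_hello   # hostname visible in the clear
```

Anyone on the path (your carrier included) sees those same bytes before any encryption is negotiated.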
Exploiting Lawful Intercept to Wiretap the Internet
The DUAL_EC_DRBG is a great example of how "golden keys" can be implemented securely, because the underlying problem is the same one that a lot of elliptic curve cryptosystems rely on.
Only the NSA could predict the output of the generator with the same level of security as ECDH or ECDSA.
I think it was Schneier who described encryption algorithms as being like a single fence post which is a thousand miles high. The vulnerability is not the encryption algorithm. The attacker is not going to break the encryption, the attacker is going to get access to the key.
The ways attackers might do that have very little to do with cryptography. Espionage, corruption, social engineering, bureaucratic incompetence, completely unrelated vulnerabilities in government networks, etc.
People are repulsed when you call it a back door or a "golden key," but "secure golden keys" are simply key escrows. Escrows are one of the first things you learn to build after asymmetric crypto.
Escrows are well understood, secure, and generally regarded as a good idea. However, "back doors" are scary, sordid, and insecure?
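To make that "first exercise" concrete, here's roughly what a key escrow looks like: a per-message symmetric key is wrapped once for the recipient and once more for the escrow holder. This is a toy ElGamal-style sketch with deliberately tiny parameters and XOR in place of real encryption, purely illustrative:

```python
# Toy key escrow: the same symmetric key is wrapped under two public
# keys, so either the recipient or the escrow holder can recover it.
import secrets, hashlib

p = 2**64 - 59   # a small prime; real systems use 2048+ bit groups
g = 5

def kdf(shared: int) -> int:
    # derive a 64-bit pad from the shared Diffie-Hellman value
    return int.from_bytes(
        hashlib.sha256(str(shared).encode()).digest()[:8], "big")

def keygen():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)                 # (private, public)

def wrap(sym_key: int, pub: int):
    k = secrets.randbelow(p - 2) + 1
    return pow(g, k, p), sym_key ^ kdf(pow(pub, k, p))

def unwrap(c1, c2, priv):
    return c2 ^ kdf(pow(c1, priv, p))

recip_priv, recip_pub = keygen()
escrow_priv, escrow_pub = keygen()         # the "golden key" pair

sym_key = secrets.randbits(64)             # per-message symmetric key
wrapped_for_recipient = wrap(sym_key, recip_pub)
wrapped_for_escrow = wrap(sym_key, escrow_pub)   # the extra escrow copy

# The recipient decrypts normally; so can whoever holds the escrow key.
assert unwrap(*wrapped_for_recipient, recip_priv) == sym_key
assert unwrap(*wrapped_for_escrow, escrow_priv) == sym_key
```

The mechanics really are that simple; the fight is entirely over who holds `escrow_priv`.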
The encryption is using the usual cryptographic primitives. The problem is that the encryption is not the problem.
> Escrows are well understood, secure, and generally regarded as a good idea.
They are not. Because they're the same thing as back doors.
Using key escrow as a method of e.g. password recovery is a security vulnerability. And at least then you're choosing who you trust and the trusted party doesn't have a nuclear neon target painted on it because the keys in escrow are only for specific parties rather than everybody everywhere.
That's really the main issue. It is manifestly unwise to create a system which, if broken, yields the keys to everything. Because it makes the ROI of breaking it so enormous that the quantity and strength of the attackers you attract will overwhelm any imperfect system, and all systems are imperfect.
I have no idea what you mean by "security vulnerability" in this case. Key escrows are the primary way asymmetric crypto is used in practice. An escrow recovers the symmetric key a message was encrypted with and you proceed as normal.
> the trusted party doesn't have a nuclear neon target painted on it
By that logic, government agencies should operate exclusively by typewriter. They don't because crypto primitives are designed to be secure against unrealistically strong adversaries. No crypto paper has ever said, "Our construction is secure in the standard model as long as the ROI is less than epsilon."
You can go down the path of "what if there's another Snowden, and it's a snow day in Kentucky...," but you aren't doing cryptography anymore--you're just developing mild paranoia.
You're using an unusually broad definition of what "key escrow" means.
And your definition is covering the scenarios that nobody would call a back door, so I'm not sure what point you're trying to make.
Giving a third party access to your private information is a security vulnerability unless you can trust the third party, and "you can trust the government" is a statement contrary to evidence.
> By that logic, government agencies should operate exclusively by typewriter. They don't because crypto primitives are designed to be secure against unrealistically strong adversaries. No crypto paper has ever said, "Our construction is secure in the standard model as long as the ROI is less than epsilon."
Once again, the crypto is not the issue. Nobody is expecting the attacker to solve the discrete logarithm problem. But breaking the cryptosystem is not the only way to obtain the encryption keys. You have a serious security vulnerability if the expected value of the attacker obtaining the secret key is larger than the amount of money required to bribe the relevant government employee(s). Or if the servers the government keeps the keys on are vulnerable to heartbleed/shellshock/whatever at any point after attackers learn of the vulnerability, regardless of the strength of the escrow cryptosystem.
> You can go down the path of "what if there's another Snowden, and it's a snow day in Kentucky...," but you aren't doing cryptography anymore--you're just developing mild paranoia.
It's not doing cryptography at all. It isn't a cryptographic problem. The strength of the cryptography makes little difference because the cryptography is not the weakest link.
The problem is that government agencies have demonstrated very clearly that they will use such a key as often and as liberally as they can get away with. Every time the key is brought out and used, it becomes vulnerable to interception or disclosure to the point where pretending that it is sure to be safe is just completely out of touch with reality.
I think that's the main point where people disagree about this kind of thing. One side says "but theoretically our Fort Knox vault solution would be secure" and the other side says "What I just heard you say is 'If I sprinkle magic pixie dust around this intentional vulnerability and ignore it, then it would be secure,' and that's ridiculous; a group of humans would almost certainly not act securely around that vault, because they never have."
Of course the "other side" is right, but people don't think that way naturally so this kind of proposal/argument keeps coming up.
Key escrow is a back door.
Schneier has generally moved focus from designing and breaking cryptography (although he was a key contributor to the Skein SHA3 candidate) to the bigger aspects of risk. And trusting the government or any third party is risk.
You'll like his blog.
Only the NSA could generate the key initially. After that, who knows who will get access to it? You still have the same fundamental problem as with all backdoors, you've just eliminated one that most of them have (the fact that they're crap enough that you don't even need the back-door key to access them).
"The NSA" isn't a person. If "the NSA" has access to a backdoor that means there are possibly hundreds of people who have access to that backdoor, every single one being a mere human being.
It's not a secure golden key in a glass box with the label "in case of emergency break glass". It's a note on a shared whiteboard in an open plan office.
Predicting the generator's output reduces to solving the discrete logarithm on elliptic curves, which is incredibly hard unless you chose the private key.
Just like in any asymmetric crypto scheme, given only the public key, it's very difficult to find the private key.
(Actually doing it requires knowledge of how the generator works. I promise you it is easy given the private key, but you can go into its internals if you don't believe me.)
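For anyone who does want to go into the internals, here is a toy version in Python, on the 13-point textbook curve y^2 = x^3 + x + 6 over GF(11). Real Dual_EC_DRBG uses P-256 and truncates its outputs (which this sketch omits), so this only illustrates the trapdoor structure, not the full attack:

```python
# Toy Dual_EC_DRBG over y^2 = x^3 + x + 6 (mod 11), a 13-point group.
# Whoever knows the secret d with P = d*Q can predict every future output.
p, a, b = 11, 1, 6

def add(P1, P2):
    # Elliptic-curve group law; None is the point at infinity.
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, pt):
    # double-and-add scalar multiplication
    acc = None
    while k:
        if k & 1: acc = add(acc, pt)
        pt = add(pt, pt); k >>= 1
    return acc

Q = (2, 7)      # public point (a generator of the group)
d = 5           # the designer's secret
P = mul(d, Q)   # public point; the relation P = d*Q is never published

def drbg_step(s):
    # next state comes from P, the output comes from Q
    return mul(s, P)[0], mul(s, Q)[0]

state = 7                                   # secret internal state
state, out1 = drbg_step(state)
_, out2 = drbg_step(state)                  # a "future" output

# The attacker sees only out1 but knows d: lift out1 back onto the curve
y = pow((out1**3 + a * out1 + b) % p, (p + 1) // 4, p)  # sqrt; p % 4 == 3
R = (out1, y)                               # R = +/- s*Q
recovered_state = mul(d, R)[0]              # x(d*s*Q) = x(s*P) = next state
assert recovered_state == state

_, predicted = drbg_step(recovered_state)
assert predicted == out2                    # future output predicted exactly
```

The same algebra works on P-256; truncation just forces the attacker to try a few candidate points per output.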
> Isn't it just trivial for someone to "recover" your data by pretending to be you, via court order, NSL or other means?
Without a "golden key," somebody would have to ask you. With the "golden key," somebody would have to ask the organization in charge OR you.
Edward Snowden proved how this is far from secure. There are people at the NSA who, if they wanted to spy on a girlfriend, could walk out with those keys.
The underlying problem is that you're no longer relying on technological security and mathematical certainty, but rather on human security and the capriciousness of people who earn $65,000/year to keep secrets.
There's no security at all there.
Your assumption that ""golden keys" can be implemented securely" is wrong. The only secure golden keys are those not used. Which means, in order for such keys to be really secure they mustn't be created.
Isn't it true that today I can easily clone an iPhone's data by declaring I lost my phone and need to buy a new one, then recover everything from iCloud?
Isn't it just trivial for someone to "recover" your data by pretending to be you, via court order, NSL or other means?
Steve gauges how safe the data in a cloud storage service is by asking one question: "How easy is it for me to recover the data from your cloud service if I lose/forget my password?"
On an iDevice, it is NOT that hard for the phone's owner to do it. It is trivial for any government agency to fake the owner's phone number, text messages, or email to recover or reset the iCloud password.
How do we know, and be sure that this is true?
Implementing the "back door" might be a difficult problem, but stating a priori that it's an intractable problem is just lazy thinking.
1) Have the device manufacturer, FBI and Federal Court system all create their own public/private key pairs. FBI and the courts publish their public keys; the device manufacturer includes their public key, the FBI's and the court's on each device.
2) Each of these organizations stores their private key in their own central secure facility. The key is only to exist on airgapped systems, further encrypted using a secure multi-party scheme so that multiple people are needed any time the private keys are accessed. Include personnel vetting, logging, video cameras, secure vaults, etc. to add further security.
3) When the user selects their password, the storage media is encrypted using that password, and the password is stored in a separate location on the device encrypted using the FBI's public key, the device manufacturer's public key and finally the courts' public key.
4) When law enforcement needs to decrypt the phone, they request a warrant from the court. When the warrant is approved, they send a copy of the encrypted password to the court's secure facility. The court decrypts it using their private key and hands it back.
5) The FBI (or the court, whichever) brings a copy of the warrant and the partially decrypted password to the device manufacturer. If the device manufacturer wishes to contest the warrant, they go through the normal appeals process. If not, they decrypt again using their private key and hand the resulting encrypted password over to the FBI.
6) The FBI takes the resulting encrypted password back to their own secure facility and decrypts it once more using their own private key, giving them the plaintext password. They now have access to the device, and could only have gained it with a valid warrant, with a third party trusted by the suspect (the device manufacturer) given a further say in whether or not to appeal the warrant.
There's my quick solution. Feel free to further the discussion by critiquing or improving it rather than downvoting to oblivion.
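For what it's worth, steps 3-6 amount to onion-wrapping the password under three public keys, peeled in reverse order. A toy sketch of just that layering (ElGamal-style wrapping with a hash keystream; tiny parameters, illustrative only, not a real construction):

```python
# Toy sketch of steps 3-6: the password is wrapped under the FBI's, then
# the manufacturer's, then the court's public key, so recovery needs all
# three private keys, applied in reverse (court -> manufacturer -> FBI).
import secrets, hashlib

p = 2**127 - 1   # a Mersenne prime; real systems use far larger groups
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)                 # (private, public)

def keystream(shared: int, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(str((shared, ctr)).encode()).digest()
        ctr += 1
    return out[:n]

def wrap(data: bytes, pub: int) -> bytes:
    k = secrets.randbelow(p - 2) + 1
    c1 = pow(g, k, p)
    ks = keystream(pow(pub, k, p), len(data))
    return c1.to_bytes(16, "big") + bytes(x ^ y for x, y in zip(data, ks))

def unwrap(blob: bytes, priv: int) -> bytes:
    c1, ct = int.from_bytes(blob[:16], "big"), blob[16:]
    return bytes(x ^ y for x, y in zip(ct, keystream(pow(c1, priv, p), len(ct))))

fbi_priv, fbi_pub = keygen()
mfr_priv, mfr_pub = keygen()
court_priv, court_pub = keygen()

password = b"correct horse battery staple"
blob = wrap(wrap(wrap(password, fbi_pub), mfr_pub), court_pub)  # step 3

step4 = unwrap(blob, court_priv)     # court peels its layer first
step5 = unwrap(step4, mfr_priv)      # then the manufacturer
assert unwrap(step5, fbi_priv) == password   # finally the FBI reads it
```

Note this demonstrates only that the crypto composes; it says nothing about the key-custody and process problems the replies below focus on.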
Rather than critiquing your specific solution, allow me to explain why no solution is possible in general.
0. Because this is Apple, people are assuming Apple's model, but it is not acceptable to enshrine that model into legislation for all of the reasons I could go into if they don't seem sufficiently obvious to anyone. And if Cyanogen on a Nexus device or Debian on a PC doesn't support the backdoor (or does but the device owner has the power to remove it), it is clearly unreasonable to hold Apple to a different standard, and in any event would be ineffective because the "criminals" (or anyone desiring privacy) could just use non-Apple devices.
1. Even if you are willing to pass a law against general purpose computing, if you have a backdoor that can't be updated you have both a security vulnerability (because flaws can't be fixed) and an ineffective backdoor (because if someone e.g. loses their keys there is nothing you can do about it).
2. If you have a backdoor that can be updated, whatever process is used for updating the backdoor is now a security vulnerability. Having the updates signed by the Director of the FBI and the Pope provides no security because the Director of the FBI and the Pope have neither the expertise nor the time to understand what it is they're signing, and a rubber stamp enforced by digital signatures is still a rubber stamp.
Also, what happens once a key is leaked? Are we supposed to throw away every phone and every computer?
Also, if you allow firmware to reflash its keys, then you have a mechanism that can again be subverted (in addition to what you point out). I think we've seen with games that this sort of DRM doesn't work in the long run -- it's routinely cracked, but manufacturers don't care too much as long as it buys them time-windows long enough to make bank. Making it the basis of all data-handling in the land doesn't seem particularly smart.
Saying "it's not possible to build a secure backdoor" is a cop-out.
The truth is, we can build systems so that your data can be decrypted with either your key, or a master key. And we can build systems where the master key requires multiple coordinated parties.
Such systems can be used to implement secure backdoors, but only if we can trust those parties with the master keys to do their job properly.
- Proper key security so their keys don't get exposed
- Proper access controls so that the keys are only used for the purpose for which they're intended.
The problem is that we know with complete confidence that all of the parties involved in that process will fail at one or both of those hurdles.
This isn't a technical problem. "Secure golden keys" are technically feasible, and not even particularly hard. It's simply an issue that there is no one that you ought to trust with such a key.
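The "multiple coordinated parties" piece really is standard machinery, e.g. a Shamir split where any 2 of 3 trustees can reconstruct the master key but any single trustee learns nothing. A minimal sketch (toy 2-of-3 over GF(p), illustrative only):

```python
# 2-of-3 Shamir secret sharing over GF(p): the master key is the
# constant term of a random degree-1 polynomial; shares are points on it.
import secrets

p = 2**127 - 1   # prime field modulus

def split(secret, n=3, k=2):
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

master = secrets.randbelow(p)
s1, s2, s3 = split(master)
assert combine([s1, s3]) == master   # any two shares recover the key
assert combine([s2, s3]) == master   # one share alone reveals nothing
```

Which only sharpens the point: the math has worked for decades; the trustees are what fail.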
Your alternative is basically saying "it's theoretically possible, but in practice impossible". If so, the theory is incomplete: it fails to account for the human factor.
2) If you have jurisdiction over someone, you can force them to decrypt something based on a warrant, so it isn't even desirable for a backdoor to exist. Rather than hoping they follow the rules in their ultra-secret room that normal people will never see, they can work through the court system, publicly.
3) There's a concept called "attack surface." A backdoor increases that, by increasing the number of secrets.
4) The article goes into detail over the real-world failures of systems like these.
5) The device manufacturer does not necessarily have our interests at heart and should not be the party contesting warrants in any reasonable system.
6) Even if the encryption scheme was sound, human error trumps all. We're introducing all sorts of elements, each of which is a point of failure, and none of which has interests aligned with those of the end user (us). OTP is provably secure, you know. It's the key management that gets you.
So we haven't demonstrated the need for this vs. compelling someone via warrant to decrypt and other means. We have good reason to believe that illegal, dragnet surveillance is the real reason for opposition. Even if you believe that our own government is (and always will be) perfectly virtuous with that power, there are many foreign governments with similar capabilities. Not sending them any traffic? Think again (BGP hijacks). And we have lots of experience showing that such systems leave people vulnerable to bad guys.
So this asks us to take on big risks for bad reasons and we know that. There just aren't any good reasons not to dismiss and oppose such systems outright and there are many good reasons to be suspicious of the motives of anyone telling us otherwise.
I blacklisted WaPo some time ago as not being a credible news source. This editorial only gives me further reason to keep them blacklisted.
A lot of people in IT seem to be blind to this. Encryption is no different from a very good hiding spot. If the authorities know it exists and you know how to access the information within it, there are ways to get you to disclose that information. It's not like they will just let you walk away if you simply say "no".
And that's just for the normal "criminals" argument. For "terrorists" there are even more ways to get them to disclose information. Rubber-hose cryptanalysis is a thing.
We've seen law enforcement abuse every form of power they've been handed (see National Security Letters for one example), so I don't trust them to Do What's Right. Ever.
I think that as N grows, the probability that q >= 1 approaches 1. It strikes me as a self-evident feature of human nature.
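Spelling out the arithmetic (my notation: q is the number of insiders who go bad among N key-holders, each independently corruptible with some small probability eps; the numbers are assumed, purely for illustration):

```python
# If each of N key-holders independently goes bad with probability eps,
# then P(q >= 1) = 1 - (1 - eps)**N, which tends to 1 as N grows.
eps = 0.001                      # assumed: one-in-a-thousand per person
for N in (10, 1_000, 100_000):
    print(N, 1 - (1 - eps) ** N)
# at N = 100,000 the probability is indistinguishable from 1.0
```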
Russian brides I suppose, in the context of this discussion?
Unlike me stealing your data, there are very legitimate reasons for law enforcement to have access to devices to carry out investigations. The warrant requirement is a check on abuse. If there's an issue regarding abuse of power (e.g.: bribing employees, intimidating judges, overly broad laws), it would be better for everyone to work to address the real problems than taking away means for law enforcement to investigate actual criminals.
Correction: it could work, in theory, if US companies were willing to sacrifice massive amounts of revenue for no reward.
The making of phones is going to get commoditized - everyone could make a phone in a few years, from parts and using open source software. Then we will have choices and never have to use spyware again.
The gov't will be able to get the encrypted input sent to the device mfr, and the decrypted output that comes back. Given this data, they will have a huge head start in backing out what the mfr's private key is.
Even if this weren't so, it makes every major vendor a very cost-effective target for the government. They'll devote resources to either breaking their private key (as I described above), or just using straightforward human engineering techniques. This apparent extra layer of security will not last long in practice.
EDIT - my 2nd para above assumes that a mfr has a single private key, which is probably not the case. But generating a unique key pair for every discrete device is both an expense problem and introduces difficulties with managing the set of keys, which makes security even harder.
A backdoor is a backdoor is a backdoor: an entrance on which you have no visibility nor control. And this is exactly what a "golden key" would be, regardless of how it's implemented.
> how to reduce the risk of 3rd parties [...] gaining access.
Don't you see the contradiction there? Our systems are already full of holes, so let's create some more...?
You also assume I'm a US citizen, which I am not. The minute the US mandate backdoors to manufacturers, all governments will want in, including some very nasty customers. So what do you do next, have separate keys for each government and country-based firmware, i.e. even more holes? It's a slippery slope.
"The thing to look at" is how to remove the capability for third parties to access your data, regardless of who they are. Say that tomorrow I'll wake up in Nazi Sweden (not terribly unlikely, seeing current Euro trends), I'd rather not have brownshirts with keys to all my data - would you?
With regards to other countries, manufacturers are already required to comply with all laws in any country they operate in. I don't think Americans should have to make a decision as to whether or not their own law enforcement should have the ability to decrypt phones based on what Chinese police might do in their own country.
If you're worried about brownshirts taking over your government, there's nothing preventing you from encrypting the data yourself - nothing prevented you from doing it before. There are legitimate reasons for law enforcement to search a device after being issued a valid warrant; same as your house, place of business, safe deposit boxes, etc. Strong encryption by default on hundreds of millions of devices on which people conduct much of their daily business is something new. Locks and safes will slow down an investigation, but never to the point of bringing it to a halt. Strong encryption will. Given how central these devices have become in our lives, I don't know that it makes much sense to prevent police from searching them when they are legitimately investigating an actual crime.
(1) The Mass. Supreme Court ruling/precedent holds that a suspect can, with proper legal protocol, be compelled to decrypt a device, and that this is NOT an infringement of one's constitutional rights. This makes the entire argument of "responsible disclosure" null and void.
(2) You seeing security holes as "forgot password functionality" shows how uninformed and inexperienced you are. When you start seeing everything from CPU fans (measuring audible signals to detect an encryption key) to the latest un-patchable USB bug (arbitrary code execution via invisible controller-chip firmware alterations) as attack vectors you will begin to realize just how wide the potential 'surface' is. Care to venture what could happen if someone were to, say, reverse engineer the baseband and had access to all firmware functions of a device within an environment where the decryption key was being used? A "golden key" is simply creating the largest historical hacking bounty, and a fool would think such keys could be kept secure indefinitely.
When you are admittedly an amateur and experts are telling you this is a bad idea, you should listen. Human or machine, history has PROVEN that there is no "perfect system." Introducing new holes to accommodate bureaucratic pedantry is wholly unnecessary and, I would venture to say, idiocy.
(2) I'm not even going to bother responding to this one, since you apparently couldn't be bothered to phrase your argument without an ad hominem. If you care (which I doubt), I've already responded to a similar argument elsewhere in the thread.
It's not, of course, literally in the Constitution that you cannot encrypt things and then refuse to decrypt them for law enforcement. Nor is it in the Constitution that people selling encryption devices must include backdoors.
What words in the Constitution do you see as pertaining here?
Of course you can encrypt things yourself, and I would imagine you could probably refuse to hand over the password under your 5th Amendment rights. Nothing's changed there.
I think the issue here is that someone else (Apple) has gone and set up an encrypted environment for you in such a way that actively prevents law enforcement from carrying out legal investigations with a warrant on an incredibly popular series of devices that they were capable of searching before. Let's face it - the number of people who are buying iPhones because they are now encrypted is minuscule. If you really wanted to encrypt all of your data, optional full disk encryption already existed on other phones. The problem the police have with it is that it's now set up by default, they can no longer access it when they do have a valid warrant, and this extends to everyone instead of just the handful of criminals who would take extra steps to encrypt their data. John Q. Criminal didn't choose to encrypt his phone, Apple made that decision for him.
Not at all true. The fifth amendment doesn't mean the court can't serve you with a subpoena for your device's unencrypted contents and imprison you until you provide it, for life if necessary.
Why give law enforcement the ability to arbitrarily circumvent these protections? If the court says, "No, the defendant doesn't have to give it up." Why would you want to have a mechanism in place for the prosecutor to say, "Eff the court, we're taking that information anyway."
I'm no lawyer, but I suspect any evidence gathered that way would be thrown out anyway.
Likewise, you've always been able to encrypt your phone/computer/etc., and for the most part the police aren't able to decrypt it so long as you encrypt it properly. Nothing's changed in that respect. It's not a common enough phenomenon that an inability to decrypt some laptops will affect most cases.
The iPhone issue is different, though - it's not the user choosing to encrypt the device; it's Apple choosing to encrypt the device on the user's behalf. This won't affect the search of just a few suspects' property. Due to the popularity of the iPhone, it will likely affect conducting searches for a significant percentage of cases that wouldn't have been a problem before the last update to iOS. The police already needed a warrant to gather any evidence off of your cell phone, anyway; now Apple has gone and effectively stated that the warrant doesn't matter, the police don't have the right to search it to begin with.
That is the key point here: The moment you introduce a backdoor you have painted a target on the parties that needs to hold those keys big enough to be seen from space.
Meanwhile, with compartmentalised keys, some users accounts get compromised all the time, but no single leak is sufficient to get everyone.
There are many details I avoided in the essay that people will get sidetracked on. Such as whether Apple is actually doing it right or not. This is discussed and speculated on elsewhere. And right now I'm far more worried about a world in which they're legally prevented from trying. When the FBI starts talking about our kids getting kidnapped, I think bills start getting drafted.
There are tangents I think HN would've cared about, but which would've been a distraction. For related controversy, Keybase - which I work on - lets users symmetrically encrypt their asymmetric keys, and store that data on our servers. We also let you do crypto in JS. Many hate this! Many love it! But I want to be legally allowed to write this kind of software and release it without a backdoor.
It's hard enough when we all fight about implementation details and security vs. convenience. It'll be a real crap show when we know for certain that security has lost. I'll go program video games or something. Or just play them.
[post-post-edit[[The funny thing is, I wrote this before reading the last portion of your post. No joke. I'm pretty sure the more our privacy is invaded, the more singular an entity a collective mind becomes. (half joke).]]]
It still doesn't fix that I feel like we are one step away from the complete loss of thought privacy. I'm 28. I grew up essentially connected to a computer. Half of my personality exists always, in a computer. My entire social life is on the computer, for the entire history of my history. The image I see my existence as is literally half mechanical (I study a lot of math and computer science and code a lot) and half physically organic, and potentially half abstract mental 'thought idea stuff' (extra halves for pointing out that the world is a wibbly, wobbly place).
Should the government have been allowed in the past, to have meticulously documented my childhood, my adolescence, my adulthood? Should they have access to my baby photographs? To my first boyfriend? To our first conversations?
How does that affect who I choose (or chose) to grow into? Is every step I take a step out of a fear of not conforming to both the present ideal of 'lawful person' but also to all worlds of the potential future ideal? When did I become aware of this, as a child? When did I decide I was cognizant enough to have made this decision? Was it when I turned 18, or was it when technology reached a certain age for the entire population?
Do I have to think about every possible way that people can perceive my actions, in order to make sure I stay the way I see myself - which is fundamentally on the basis that I am a good person?
This is the money quote for me. No matter how hard I try, I can't seem to get this through to people: No, I'm not a criminal now. I don't plan to become a criminal in the future. I control my own actions, but I don't control the how my actions are perceived by other people.
Should I be arrested because some combination of my purchases today and words written when I was 17 point to me committing a terrorist act? More to the point, should I have to worry about what someone is going to think if a computer flags my metadata as "worth investigation" and they start trawling through the entirety of my online life? What effect does that have on general conversation on the internet?
I think the thing I'm most afraid of is how do I explain this to my kids? In 10 years when they start forging their own life online, how do I sit down and explain that everything they do is monitored, and they need to be careful about everything they say and do on the internet? How do I explain that it didn't used to be like that, but we failed to stop it? It sounds melodramatic to type it out, but we've had an unprecedented amount of freedom to communicate with people around the world, and now that's being taken away.
No we have not. We thought we did, but the authorities knew better. Remember the cypherpunk crackdown, the crypto wars, NSA tampering with standards, etc. They were watching us quietly enough.
The real "freedom" existed for a very tiny period between email/news/irc existing and them registering on The Man's radar, which happened fairly quickly (and no thanks to pop media like War Games for popularizing the idea of hordes of teenagers intent on vandalizing essential network infrastructure). The standards developed to respond (PGP, SSL etc) took 15 years to actually get traction (and I bet you're still not using PGP), and as soon as they hit the mainstream, they were attacked and subverted from inside the organizations trying to implement them on a large scale.
The freedom we thought we had was a myth. I think this is an important concept to keep in mind, going in this inevitable political negotiation with authorities. They lied when they told us we were free, and they will lie when they'll tell us that "golden" keys will only be used for good. Like you don't talk to the police without a lawyer, you shouldn't hand your encryption to them without a Schneier in the room.
Apple will become the phone of choice for the pedophile.
What happens when they blame strong cryptography for the next gut-wrenching tragedy involving innocent kids and a bombing / school shooting / kidnapping / pedophile?
Your arguments will be swept away by a tsunami of grief, hatred and blind patriotism. Is it possible to make an equally emotional argument in favor of the freedom to be secure?
Secure information deserves the same prestige and protection as the Four Freedoms.
If a hacker gets access to this golden key, they can use data gleaned from your child's phone to stalk or hurt them.
Also, I agree that it's shitty, but sometimes you can only fight fear tactics with scarier fear tactics.
It seems to me that politicians and others have long sought out the most emotional arguments with which to clip the wings of freedom, thus making the victims of pedophiles, bombings and kidnappings victims all over again!
(BTW: In Germany, pedophiles were also used as an argument for internet censorship. Organisations of those very victims protested against these attempts, because they clearly saw that this legislation did not help the victims (and was never intended to), nor would it prevent anything.)
The topic changes but the story never does. We still have rock music, teenagers, acid, dungeons and dragons, hippies, video games, atheism, firearms, meth, Islam, gays, texting while driving, basically all the good things in life, and we'll always have thousands of things in the future that are not cool with the conformist old people and will be blamed for everything bad whenever possible, and aside from a lot of journalist click bait nobody will really care.
It's even more of a whitewashed canard than those other things you listed.
It's frustrating that TPTB don't actually care about these cases at all, apart from their political potential. Amber alerts are usually issued in response to non-custodial parent custody disagreements, which phenomenon is 95% driven by divorce lawyers, and involves very little risk that isn't created by the police themselves. In the rare actual stranger-danger case, "authorities" routinely allow multiple hours to pass between the kidnapping and issuing the alert. Of course by that time the child is beyond help.
Also, the government spends quite a lot of effort on the investigation and prosecution of child pornography, as documented here, for example: http://www.justice.gov/usao/waw/press/categories/childporn.h... To what degree this should be a factor in encryption policy is open to debate, but your charge that 'TPTB don't actually care about these cases at all' is baseless and does a disservice to the people who investigate such crimes.
Pedophiles (or more specifically: child pornographers) routinely use strong cryptography and effectively nobody cares.
I think that I'm more than capable of detecting most forms of propaganda, but I hold no illusions as to my ability to judge _everything_ that comes my way in a fully informed manner.
I can look back and see cases where too little information, bad information, or even just a misunderstanding lead me to believe in things that just weren't true. And I'm pretty sure I'm not the only one.
Do the math.
Shoes = Evil.
This makes it far easier to send to our non-technical friends, who struggle to understand what this is all about.
Unfortunately, it always seems like these kinds of tricky technical issues are communicated by the mainstream media in oversimplified, easy-to-understand language (often in support of government entities) and, on the other end of the spectrum, we nerds end up talking about it, by default, in unapproachable language and formats.
My guess is that the vast majority of the general public trust Google to keep their data secure more than they trust themselves to remember a password.
The French people, presumably, trusted the French govt. in 1936 when they filled in their census data.
Big Data is not a modern phenomenon. It was invented in the 1930s with IBM and the Hollerith machines. The success of tabulating 40+ million Prussians was repeated across Europe.
The French census data was bravely protected by prevaricating French administrative staff who had realised the consequences, but delay turned into eventual capitulation.
You cannot trust your future government, it could be anybody, with any agenda.
The two issues I see are:
1) The bad guys might still be able to get the key (as the article discusses)
2) We don't know who the 'good guys' are. It's very comforting to think that there's some mysterious superhero who will only fight for good but this ignores the large issue: no one can really agree on what "good" is.
This is the same key escrow battle we had when cryptography was first propagated: "Oh no, it actually works, and we can't trivially break it, it'll be the end of the world!". Only this time, the battle is "Oh no, it actually works for people who aren't technically savvy, it'll be the end of the world!".
The answer is the same now as it was then: no, go away, we're building solutions that work and you're asking us to make them broken.
Still, if we start arguing that maybe your government is the bad guy, certainly we won't convince the government. Also we won't convince the people who blindly trust the government.
Therefore, it might actually be most productive to avoid point number 2, as valid as it is, and take as premise that the government & Apple are both currently good guys. And then still convince a secure golden key is a bad idea.
With this in mind, the closest I dared to get was the discussion of the future. Governments do change. Whoever has access to your golden key might be a bad guy tomorrow, or invaded by a bad guy.
Btw, I even took out a whole section about questionably lawful warrants before publishing. For this reason.
There are only some guys who try to be good, and so many of them fail.
It's very depressing to see constant regurgitation of cops-and-robbers morality.
The FBI has deep historic ties to the Pinkertons, and has historically spent at least half of its resources (conservatively) on anti-radicalism. The DEA is so deeply intertwined with the CIA that half of the drug dealers in the world are protected and immune.
Local police forces are the only entity that is serious about "fighting crime," and their trustworthiness is inconsistent from region to region, to say the least. When you consider that the largest, most economically damaging crimes are being committed by banks and hedge funds, we can basically conclude that society is preyed upon by forces that have no counter.
Instead, we're arguing over a fantasy that somehow we need the police and FBI to protect us, thereby meaning we have to give up whatever meager protections we have against them.
I'm no Apple fan. In fact I'm still incredulous that this really means what they say it does -- that iDevices are secure by default. But if it's actually true, then freaking HOORAY FOR APPLE. Enjoy it while it lasts.
we can basically conclude that society is preyed upon by forces that have no counter.
Everything is easy if you can leave figuring out the intractable problems to someone else. When you actually sit and think about this for a few minutes the problems quickly become apparent.
The closest real world system to this is DRM on games consoles (e.g. XBox 360). They use secret "Golden Keys" to protect authoring but still need to allow users access to that same information.
However, the way DRM worked in that context was a combination of hardware (e.g. roll-back protection) and tying the whole thing into a mandatory update system (i.e. no games without updates).
That DRM actually got cracked several times, but because updates were required they could just ignore it and move on. The problem with the Golden Key suggestion here is that once your data is compromised, it is game over forever. There are no do-overs as there are with DRM systems.
Most definitely. And even the way it was proposed at the end of the original WaPo editorial was very nonchalant:
> However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.
Let's all be sure to remind the "non-technical" public that there is no wizardry involved here. It's logic, mathematics, and science and some things are possible and some things just are not. Encryption in this sense is very black and white: either your data can be decrypted by a 3rd party or it can't. And if it can be decrypted by one 3rd party, you must assume it can be decrypted by any 3rd party.
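To make the black-and-white nature of this concrete, here is a deliberately toy sketch of how key escrow works. The XOR "cipher" stands in for real encryption and every name here is made up; the point is only the structure: a copy of each user's key is wrapped under the escrowed "golden" key, so whoever holds that one number decrypts everything.

```python
import os

def xor_bytes(a, b):
    # Toy XOR "cipher" -- a stand-in for real symmetric encryption.
    return bytes(x ^ y for x, y in zip(a, b))

GOLDEN_KEY = os.urandom(16)  # the escrowed master key

def encrypt_with_escrow(plaintext, user_key):
    # Stretch the key to message length for the toy stream cipher.
    stream = (user_key * (len(plaintext) // len(user_key) + 1))[:len(plaintext)]
    ciphertext = xor_bytes(plaintext, stream)
    # Escrow: a copy of the user's key, wrapped under the golden key.
    wrapped_key = xor_bytes(user_key, GOLDEN_KEY)
    return ciphertext, wrapped_key

def decrypt_with_golden_key(ciphertext, wrapped_key, golden_key):
    # Anyone holding golden_key -- a court, a thief, a foreign
    # government -- recovers the user key, and then the plaintext.
    user_key = xor_bytes(wrapped_key, golden_key)
    stream = (user_key * (len(ciphertext) // len(user_key) + 1))[:len(ciphertext)]
    return xor_bytes(ciphertext, stream)

msg = b"my private data"
user_key = os.urandom(16)
ct, wrapped = encrypt_with_escrow(msg, user_key)
assert decrypt_with_golden_key(ct, wrapped, GOLDEN_KEY) == msg
```

Note that nothing in the math distinguishes a "lawful" holder of `GOLDEN_KEY` from anyone else who obtains it.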
For the game consoles there aren't really feasible do-overs either if the keys at ring 0 (the earliest link in the chain of trust) are revealed. As in, the only path to recovery is to physically replace every game console out there.
>>The effect of this is essentially the same that the metldr key release had: all existing and future firmwares can be decrypted, except Sony no longer has the lv0 trick up their sleeve. What this means is that there is no way for Sony to wrap future firmware to hide it from anyone, because old PS3s must be able to use all future firmware (assuming Sony doesn’t just decide to brick them all…), and those old PS3s now have no remaining seeds of security that aren’t known.
Regarding the Clipper Chip, there is an observation that I haven't seen talked about in these discussions. There is an important difference in the NSA's tactics with their Clipper Chip proposal and their more recent activities: while technically and socially flawed, the key-escrow plan at least attempts to respect the constitution.
By keeping the keys at a 3rd party, in theory the NSA was limited to what they could ask for. Automatic rubber stamped warrants may allow them access to a lot of data, but it (again, in theory) would have been tiny in comparison to their current dragnets.
At first I took this as implying a change in the goals or leadership at the agency; a lot can change in a decade. But then we were introduced to BULLRUN and how it may have targeted IPSEC. Given that the IPSEC working group was started in 1995, this suggests the NSA may have been trying to subvert opportunistic crypto during the Clipper Chip imbroglio.
While it is speculation, this raises the possibility that the Clipper Chip was just a distraction while the NSA's eyes were on the larger goal: the crypto standards themselves. Of course, any proof of this is buried deep inside the NSA and not likely to see the light of day anytime soon.
Well, we know that an NSA contractor on the IETF removed opportunistic crypto from the PPP protocol. See http://cryptome.org/2014/01/nsa-rep-dirt.htm
I think it would be prudent to believe that the NSA was systematically throwing up road blocks then, and it would be prudent to believe that they're doing it now. I can't even see how such a claim would be controversial. It is their job after all, at least given the presumption that foreign bad guys use and rely on domestic technology. And we know that the NSA makes that presumption.
This is why I greatly respect historian as a profession.
A story: In 1930-33, the Belgian census bureau conducted a nationwide survey, noting down, among other things, people's religion. Although this was done innocently, for the record's sake, when the Nazis invaded the country they had access to a perfect registry of Belgian Jews.
The takeaway is that the "good guys" won't always be in possession of sensitive information, no matter how good they are. Safety comes in people's ability to manage their own information, not in their ability to manage others'.
That's the main point here, which is excellently put.
Also, the graph near the bottom is great too -- the point that our data is becoming more and more intimate is really relevant. That's something really salient, which you rarely hear mentioned these days.
In other words, even if some genius devised a backdoor that was perfectly secure (allowed access only to those who were explicitly granted that access), had it verified by an international group of geniuses 10x smarter than she is to be sure it's actually secure, and granted access to only the most trustworthy organizations, which would of course act in the most transparent way possible with a clear cut mandate and individual accountability, it's still a violation of your privacy, a violation of your rights, and in the US a violation of the constitution.
The last sentence is important. A large number of people have some area(s) that they wouldn't particularly care to have law enforcement looking at closely, even if they aren't overtly criminal in any way.
I have had this conversation. The response is, "I have nothing to hide. My life is boring." I have gotten this response from people I know have things to hide. One guy had cheated on his wife and was illegally selling marijuana. But he said "I'm a small fish. They have more important things to deal with."
For people who warned about NSA spying, they watched as the response went from "that's impossible" to "that's illegal" to "that's not happening" to "it's targeted" to "it's everybody overseas only" to "it's everybody but why do you care." What I learned is, many people are apolitical and they will just use whatever is the most convenient excuse that justifies continued inaction.
The fact that there is a backdoor is not a violation of the Constitution. If it's only used with legal warrants, then it's well within the Constitution.
In the face of every other objection, THAT should be the one to attack proponents with. If you wouldn't give the powers to your worst enemies, you shouldn't even think about giving them to your current friends.
As one of the 6.7 billion people on the planet that is not a US citizen or resident, to me, the US is a foreign government.
For American companies to gain the trust of markets outside the US, they have to take this into account. Apple and Google know this very well. Want to sell to China? Germany? Brazil? If the US government wants to enforce a backdoor on US products, that's going to do tremendous damage to those companies' ability to compete in the global market.
Nah, they'll just go "f*ck this" and share the backdoor with other governments (who will be extremely grateful). And then all these golden keys will magically show up on Iranian and Pakistani servers, Kim Jong-un will parade them, etc etc.
- Second Amendment to the United States Constitution
Armament = Weapon.
The US government classified cryptography as a munition.
Munition = Weapon.
In law, it's always about the technicalities, down to the definition of words. Prove me wrong.
It would be a fabulous interpretation, although I'm somehow skeptical that we'll ever find a court endorsing it.
Seems like that could have been curtailed by a five-minute trip to the computer room and asking the guy with the longest beard.
Everything anybody needs to know about IT people can be learned from watching Star Trek.
1) don't allow you to actually secure yourself properly, because then you'd also be secure against them, and that seems to be a big NO-NO
2) with each new "cyber-threat" they seem to want to remove even more of our rights and liberties
So forgive me if the next time the US government yells CYBER PEARL HARBOR PEDOPHILES TERRORISTS - I'm not in a rush to believe them.
And, of course, these illegal numbers can be leaked.
Any "Secure Golden Key" would be a number which is the encryption backdoor key. Knowledge of that number would enable you to decrypt any content encrypted with that key, and if someone who were not a government actor were to discover that number, it would doubtless be decried as illegal to know, possess, or publish.
Later the same thing happened for Blu-ray. Only it was an encryption key that was discovered and subsequently illegal to distribute under the DMCA.
Imagine a hardware device that can vend session keys to any device, but will not do so unless it can push a notification to a public server that publishes the identity of both the accessor (who must authenticate themselves) and the identity being backdoored.
If the argument is that law enforcement will only use this when going through proper channels, let this be enforced by forced transparency.
The hardware device would be designed to never let the underlying key leave the device. So for the key to fall into the wrong hands, the hardware device would have to physically fall into the wrong hands.
The hardware would therefore need to be housed in a secure location.
This is what I personally would require before I could ever support a golden key.
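As a thought experiment, the vending logic of such a device might look something like this minimal sketch. Everything here is hypothetical (the class names, the hash-chained log, the key derivation); a real HSM and transparency log would involve vastly more, but the sketch shows the ordering that matters: no publication, no key.

```python
import hashlib
import json
import time

class PublicLog:
    """Append-only, hash-chained log standing in for the public server."""
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32

    def publish(self, record):
        blob = json.dumps(record, sort_keys=True).encode()
        self.head = hashlib.sha256(self.head + blob).digest()
        self.entries.append((record, self.head.hex()))
        return True  # in reality: fails if the public server is unreachable

class KeyVendor:
    """Hypothetical HSM: vends per-device session keys only after logging."""
    def __init__(self, log, master_secret):
        self._log = log
        self._master = master_secret  # never leaves the device

    def vend(self, accessor, device_id):
        # Refuse unless the access is publicly recorded first.
        ok = self._log.publish({
            "accessor": accessor,
            "device": device_id,
            "time": int(time.time()),
        })
        if not ok:
            raise RuntimeError("transparency log unavailable; no key vended")
        # Derive a device-specific key; the master secret stays inside.
        return hashlib.sha256(self._master + device_id.encode()).hexdigest()

log = PublicLog()
vendor = KeyVendor(log, master_secret=b"escrow-master")
key = vendor.vend("FBI field office", "device-1234")
assert len(log.entries) == 1  # the access is now on the public record
```

The weakness, as noted above, is that nothing technical forces a real device to be built this way or to stay this way.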
This is a very interesting point. Sadly it's just as impossible to guarantee transparency when using the key as it is to guarantee the security of the key :(
The ironic thing is that eventually, those that the golden key is aimed at today will probably be the least hurt by it. The bad guys will take precautions and not trust the security of public services (they'll be using their own secure channels) while the good guys will continue to lose their privacy and liberty.
I don't get it. What's the difference between a "backdoor" and a "golden key" other than that one sounds less threatening than the other? It's seriously clear that WaPo simply has no idea what they're even talking about. Did they not have a Software Engineer on hand they could run this past?
The bottom line is: Sadly, there's no way to tell who the "good guys" are.
Much better is just to observe that any such key can't be kept from the bad guys, whoever they may be, and backdoored encryption over time tends to be no encryption at all. This is simply the truth. For all we know Snowden carried out some encryption keys, and if he didn't, for all we know Snowden II will.
Making the former argument puts you in a position of trying to argue uphill against what may be some very solid presuppositions about the goodness of the good guys; the second argument is undeniable, and very simple. (Of course some will manage to deny it anyhow; there's no such thing as a perfectly effective argument. But it's much stronger.)
Even if you give it to only the "good guys" and somehow ensure that nobody else could get it, there's no way to know the "good guys" will always be "good". It all depends on context. If the NSA is stopping people from harming innocents, they (rightfully) think they're the good guys. If they are secretly spying on said innocents without their consent, people (rightfully) think they're the bad guys.
I can believe that a lot of people have good intentions. But that doesn't mean their actions are always right.
From governments, to corporations, to malls, to your neighbors - they're all going to dig dirt on you given half the chance.
Scenario 1: Apple creates a secure system without a backdoor. Everybody uses it.
Scenario 2: Apple creates a "secure" system with a golden key that lets US law enforcement in. Nobody with a clue outside the US uses it. Several European governments ban it. Apple (and US) market share declines around the world.
> Threat #2... Even if you trust the U.S. government to act in your best interest (say, by foiling terrorists), do you trust the Russian government? Do you trust the Chinese? If a door is open to one organization, it is open to all.
Even if the Washington Post has no idea how cryptography works, this is incredibly naive politically.
"A court", you say? From which country? Google and Apple do business worldwide. Can China's courts access my data? Iran's? If Apple employees have the key, they decide. And if they decide, any country with enough monetary leverage can get the key. "You can't do business in China unless you give us access."
(Not to mention the head-in-the-sand implication that the US government would only get data using due process.)
The only way this can work is if the tech firm cannot access the data.
This is normally when somebody starts talking about using GPG to replace TLS and DNSSEC. But that is usually somebody who doesn't realize how graph theory, or at least path finding, works, so they are ignored.
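The path-finding problem being alluded to fits in a few lines: in a sparse web of trust, many pairs of keys simply have no chain of signatures between them, so there is no basis for trusting an unknown party. The graph and names below are made up for illustration.

```python
from collections import deque

# A toy web of trust: who has signed whose key (hypothetical people).
signed = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": [],
    "dave": ["erin"],
    "erin": [],
}

def trust_path(graph, start, target):
    """BFS for a chain of signatures from start to target, if one exists."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no certification path: no basis for trust

print(trust_path(signed, "alice", "carol"))  # ['alice', 'bob', 'carol']
print(trust_path(signed, "alice", "erin"))   # None -- disconnected component
```

TLS sidesteps this by fiat (a few hundred trusted roots); a web of trust has no such shortcut for strangers.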
There's no substantive analogy to be made at all.
Also, there are quite decent alternatives to DNSSEC. In particular, DNSCurve: http://dnscurve.org/
PKI is admittedly a mess at the moment. News at 11. But that's irrelevant to arguments regarding Golden Keys, aka backdoors to your own, private information.
We need at least one viable alternative to DNS. Competition.
The humor writes itself
The old guy is seeming more and more prescient nowadays. Pretty clear that BF wouldn't have supported backdoors (especially not with the grim Orwellian doublespeak name "Golden Key.")
But who knows? I think it's a bit silly to presume what the Founding Fathers would have thought about modern concepts like cryptographic backdoors, when they would have probably been far more appalled by all of the black people and women running around unattended than at the existence of a standing army or laws governing interstate commerce or whatever.
generally speaking... i'm not implying all of the Founding Fathers supported slavery, obviously, just that their political and social priorities would not necessarily be what we might expect.
Many folks are blessed with the fortune of living lives without turmoil or great consequence, and for those folks it may seem perfectly normal to allow the state into their affairs. But we should not force the state to be in everyone's affairs, we should not chose to live in a police state out of some fool notion that there will be less crime or less terrorism there. That's not how it ever works out.
If I were to lose my encryption key, I would absolutely expect to lose my data. That's how encryption works. That's how encryption should work. If I can get my data back without the encryption key, someone else can too.
By "care" I mean "change buying habits" or "plop down money." Talk is cheap. People will say they care about security or privacy but since they do not change their spending behavior these are just words. Where people spend money is the only thing that matters in terms of steering corporate behavior or product road maps.
What users care about is convenience, overall user experience (UX), and cost, in roughly that order.
Enterprises and governments do care a bit more about security, but they often don't understand it and make poor buying decisions in that space.
* Edit: A few tech-savvy or politically concerned users do care about security and privacy, but this is (1) a small category, and (2) generally a group that doesn't want to pay for things and thinks everything should be free. #2 is the most significant factor-- if you don't pay for things you don't exist.
Maybe that's what you meant by convenience/user experience. Client side encryption with a nontrivial, irrecoverable key that must be remembered and kept secret by the end user is not a very usable system for the vast majority of people.
There are physical limitations on Moore's law. If you were to design a mathematically good/perfect n-bit cryptosystem that took 2^n tries to break, you can make some very strong guarantees. For example, even a perfectly efficient computer (1 electron flip per attempt) would (on average) take more energy than the sun will ever emit to break an ideal 256-bit cryptosystem.
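A back-of-envelope version of that claim uses the Landauer limit: each irreversible bit operation costs at least kT ln 2 of energy. The solar figure below is a rough estimate, and "1 electron flip per attempt" is the idealization from the comment above.

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 2.7                      # cosmic microwave background temperature, K
landauer = k_B * T * math.log(2)   # minimum energy per bit flip, ~2.6e-23 J

# Average tries to brute-force an ideal 256-bit key: 2^255
energy_256 = 2**255 * landauer

# Rough lifetime energy output of the sun:
# ~3.8e26 W sustained for ~10 billion years.
sun_total = 3.8e26 * 10e9 * 365.25 * 24 * 3600

print(f"energy to count to 2^255:   {energy_256:.2e} J")
print(f"sun's total lifetime output: {sun_total:.2e} J")
print(f"ratio: {energy_256 / sun_total:.2e}")
```

Even with these maximally generous assumptions, the brute-force energy exceeds the sun's entire lifetime output by many orders of magnitude.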
> discovery of flaws in present encryption methods
This is the kicker. However, I'm hopeful. We have some really old cryptosystems that are still going 20-30 years later. As analytical techniques advance, I bet we'll find some even more rigorous constructions.
This probably still means that 256 bits is good enough. Of course, then there's RSA to worry about..
Otherwise, you may want to get on the phone with everybody using AES-128, et al and let them know. 2^64 would be rather trivial to crack these days.
And yet, if it could be implemented in a fast way, it would bring AES-128 into the range of trivial breakability.
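For a sense of scale on why 2^64 counts as breakable while 2^128 does not, here is some rough arithmetic. The attacker numbers are deliberately generous illustrative assumptions, not real benchmarks.

```python
ops_64 = 2**64
ops_128 = 2**128

# Assume a well-funded attacker with a billion devices, each trying
# a billion keys per second (illustrative figures only).
rate = 1e9 * 1e9             # 1e18 keys/second
year = 365.25 * 24 * 3600    # seconds per year

print(f"2^64 keyspace:  {ops_64 / rate:.1f} seconds")        # tens of seconds
print(f"2^128 keyspace: {ops_128 / rate / year:.2e} years")  # ~1e13 years
```

The 64 extra bits turn "a coffee break" into "a thousand times the age of the universe," which is why a square-root speedup against AES-128 would matter so much.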
I hope by then we're thinking about which secure system to migrate to.
- Anything the Government intercepts must be disclosed to the parties intercepted after some period of time.
- After three years, longer than the length of most criminal investigations, the Government must disclose intercepts, except that the Government may choose to withhold information for no more than 25% of intercepts. This allows for long-running intelligence operations and criminal investigations, while forcing disclosure of excess interception.
- Every three years thereafter, the Government must disclose all but 25% of the withheld disclosures. So, over time, more intercepts are disclosed, until all are.
- Records of all interceptions, listing the parties involved and all other pertinent data but not the content of the interception, made within the US or of persons or entities within the US, must be reported within 7 days to the office of the Attorney General of the United States, the Administrative Office of the U.S. Courts, and the Librarian of Congress, there to be held permanently and securely. This puts a copy of the data in the hands of each branch of government.
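Under the assumption that the government always withholds the maximum 25% of the remainder at every three-year mark, the disclosed fraction converges geometrically; a quick calculation:

```python
withheld = 1.0   # fraction of intercepts still undisclosed
for years in range(3, 31, 3):
    withheld *= 0.25   # at most 25% of the remainder may be kept back
    print(f"after {years:2d} years: {100 * (1 - withheld):.4f}% disclosed")
```

So even with maximal withholding, 75% is public after three years, about 93.75% after six, and effectively everything within a few decades.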
Be careful of unintentional incentivization. The government could just intercept four times more than it really wants.
...first we'd have to find at least one worthy keeper and it's a social problem, not a technical one.
(May have strong language)
There is a good debate to be had here, but it really shouldn't contain the word "wizardry" anywhere.
Just a shame they managed to trick Orin Kerr into that mess they call journalism.
I hope this stays at the top for 3 days.
Here's my take on it and why I think you dismissed it too quickly (but of course you are free to do what you want):
He-Man is a cultural theme that everyone understands and can relate to. If you watched it as a child, you know what he represents.
So the author is making the point that a "magic tool that only the righteous may wield" is about as realistic as a saturday morning cartoon. He does this in one line and with one image, instead of in many more lines of text that would have sidetracked his main points.
The political viability and constitutionality of the myriad issues mandatory escrow systems would give rise to were never tested. Furthermore, by the time key escrow was dropped the existing CALEA legislation was already a huge win for the FBI and DoJ. And given that it's 2014 and opportunistic/pervasive encryption is still more a pipe dream than a reality, the FBI and DoJ didn't care to expend more political capital pressing their case.
But now that we're potentially on the cusp of more pervasive encryption, we should all be prepared for some major PR battles going forward.