Let's say that you're using OTR to provide very strong end-to-end encryption for a conversation between yourself and a buddy, Bob. Maybe he's in a hostile area, and you're worried that if his government sniffs his traffic, he could be executed for speaking to Americans.
Intercepted data in transit, if the encryption is configured correctly, is almost certainly safe. No one will be able to immediately decrypt it, because the encryption is strong.
So are you safe?
Probably not. The next step that government would take would be to raid your friend Bob's apartment, arrest him, and take his hard disk. His OTR key (and, if he uses Pidgin, any stored account credentials in plaintext) is plainly available on the disk. They now have the private key.
But what if he used TrueCrypt or PGP full-disk encryption? His data would be safe from decryption then, right?
Sort of. If they're trying to break the actual encryption, they'd likely be unable to do so. Unfortunately, the weak point for TrueCrypt disks or volumes isn't the crypto... it's the passphrase. The passphrase can be brute-forced significantly more easily than breaking the encryption itself. Furthermore, as xkcd so accurately pointed out, a hostile government will throw you in prison (or, worse, hit you repeatedly with a wrench) until you divulge your passphrase and data.
Encryption is great, and I encourage everyone to use reliably strong crypto. Will that keep your data safe from the criminals that stole your work laptop? Absolutely. Will it keep your data safe from the NSA? You're kidding yourself.
Remember that post a little while ago about how most logical fallacies aren't actually logical fallacies? Here, you are committing an actual logical fallacy. It's called "shifting the goalposts."
The article is in response to a dragnet surveillance program, where everyone's communications are watched and presumably datamined. It's very easy to do this, because nothing is encrypted, and everyone uses services that expose metadata (like who is IM'ing who).
Your comment is entirely true. However, it presents an adversary that doesn't want dragnet surveillance, but targeted surveillance. It assumes that Bob will be immediately arrested if his communications become encrypted.
This is not the threat model that we're faced with now. Let's say you and Bob communicate using accounts you've made on random XMPP servers using Tor, and all the messages are encrypted with OTR. Both servers are in the US, and the NSA's metadata database shows E83Gxw@jabber.org sending lots of ciphertext to PAnd9B@jabber.org.
This is "NSA-proof" in that the NSA would not know to link PAnd9B@jabber.org with you using their existing systems. They would have to drastically escalate the cost of their surveillance program with respect to you and Bob to figure out what you're talking about. Unless you really are a political dissident, conspiracy theorist who accidentally discovered the UN's black helicopter program, or radical Islamist, you are now out of the surveillance dragnet.
That is to say, unless the threat model changes, using privacy-enhancing technology will keep your data safe from PRISM and similar dragnet programs.
I absolutely agree with you, but I was addressing the potentially misleading title of the article, not the current dragnet observation (which is why I quoted the title of the article). Many people that may not be experienced in cryptography may think that these techniques would protect them from law enforcement, or that their encryption would actually be "NSA proof."
Your description of secure communication (Tor, anonymous XMPP, etc.) is totally accurate and a great explanation for those who may not be quite as familiar with security and OPSEC.
Unfortunately, with great enough amounts of metadata and computing power, this could theoretically be correlated as well. Correlating DNS requests or Tor connections with the timing of encrypted messages between two individuals seems like it would be too noisy to be meaningful, but it's not impossible. There's precedent for this: the FBI suspected Jeremy Hammond of being a certain identity on IRC, and correlated his sign-ons with his connections to Tor.
You're absolutely, 100% correct, though, that this data (even correlated) must be part of some sort of targeted surveillance. I wasn't trying to counter the dragnet argument as much as provide clarity on what these security measures would or would not do.
Thanks for providing even more clarity in the areas I didn't address :)
You're quite welcome. I'm glad you liked the description. The title of the article is indeed misleading; a better one would be "PRISM-proof."
With a total record of the entire Internet (global passive adversary), you could definitely defeat anything using Tor, but I wonder if you could make it significantly harder still by randomly generating XMPP accounts every time. There's a potentially infinite space, and you can do XMPP registration in-band.
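A minimal sketch of what "randomly generating accounts" could look like, assuming all you want is an unguessable localpart for each new conversation (the server name and length here are arbitrary choices, not a recommendation):

    # Hypothetical throwaway-JID generator: each conversation gets a fresh,
    # unguessable localpart drawn from a space too large to ever collide.
    import secrets

    def fresh_jid(server: str = "jabber.org") -> str:
        # 10 URL-safe random bytes ~= 80 bits of entropy
        return f"{secrets.token_urlsafe(10)}@{server}"

    print(fresh_jid())  # e.g. 'pDd0yVZbCNLRTw@jabber.org'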
Of course, you could also just run XMPP servers locally as chat endpoints, and never have subpoena-able records. These could be Tor hidden services, which would give you the added bonus of global reachability. I think this eliminates most correlation attacks you could do; all you'd see on Alice and Bob's endpoints would be Tor circuits.
All this will make for a really interesting Wire reboot, if/when that happens.
Not to mention, if we (US citizens) actually have to weigh how hopeless encryption would be at protecting us from a government gestapo, versus reasonable encryption to protect data in transit, we're truly and hopelessly fucked. It's a disturbingly good, and ironic, argument for standing up against the illegality and unconstitutionality of the NSA's program.
> This is not the threat model that we're faced with now. Let's say you and Bob communicate using accounts you've made on random XMPP servers using Tor, and all the messages are encrypted with OTR. Both servers are in the US, and the NSA's metadata database shows E83Gxw@jabber.org sending lots of ciphertext to PAnd9B@jabber.org.
Tor won't help at all if all of the long-distance network traffic in the country is being mirrored (as it has been in the USA for most of a decade).
Corollary: The NSA knows exactly who runs The Silk Road. Stopping drug trafficking is obviously not as high a priority to them as not letting potentially kinetic adversaries know that Tor provides no anonymity to someone who can (and does) monitor _all_ network traffic.
Tor messages are disclosed only if the NSA etc. run enough of the entry/exit nodes. Simply passively recording Tor traffic might let you do traffic analysis, but won't let you deduce the contents of that traffic.
They'd have the whole of the Tor network, including entry and exit nodes. Why run your own exit nodes when you can just sniff the traffic of the existing ones?
That's the old mentality: "NSA/CIA/FBI/UGA will actively spy on me."
The way I see it, that's not the greatest danger right now. Instead, we should be worried about the government being able to passively spy on everyone at the same time, by indiscriminately siphoning and analyzing data.
However, according to https://en.wikipedia.org/wiki/Perfect_forward_secrecy OTR does provide "perfect forward secrecy as well as deniable encryption". Doesn't that provide some protection against rubber-hose cryptanalysis?
No. As I understand it, the "deniable" in "deniable encryption" is that after the first handshake, there's no cryptographic proof that the messages sent originated from you. That's flimsy as legal evidence, because far more of the messages originating from your Pidgin instance are actually yours than are somehow fake, and it's nonexistent as a defense when presented to someone who's already torturing you.
Perfect Forward Secrecy means that even if you want to you cannot decrypt old messages, since the keys used are ephemeral and destroyed at the end of the session.
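To make the forward-secrecy part concrete, here's a minimal sketch of an ephemeral key exchange using the Python `cryptography` package. Note that OTR's actual protocol uses classic Diffie-Hellman with its own key-rotation machinery; X25519 here just illustrates the principle:

    # Ephemeral key exchange: the session key exists only for the session.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each session, both sides generate throwaway key pairs...
    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    # ...and derive a shared session key from them.
    shared_secret = alice_eph.exchange(bob_eph.public_key())
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"otr-style session").derive(shared_secret)

    # At session end both sides discard the ephemeral keys. Seizing a
    # long-term identity key later cannot recover session_key: the inputs
    # that produced it no longer exist anywhere.
    del alice_eph, bob_eph, shared_secret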
> Perfect Forward Secrecy means that even if you want to you cannot decrypt old messages
Which means, if they're jailing you until you do decrypt the messages, you get jailed indefinitely. Contempt of court has very few limits in some circumstances, even compared to being imprisoned after being convicted of a crime.
> Which means, if they're jailing you until you do decrypt the messages, you get jailed indefinitely.
Maybe, but they wouldn't be waiting for you to do something for them. They would understand that there was nothing you could do to help them decrypt the messages. That is, your encryption worked.
However, unless there is a legal requirement that you maintain the records in question, a documented habit of destroying them is almost certainly enough to get out of contempt of court for not producing them, absent some specific reason to believe you kept those special.
One thing widespread encryption would do is make it impossible for the NSA to just slurp the combined textual output of humanity into Hadoop and MapReduce over it.
They can use "hitting the suspect with a wrench" cryptanalysis on a solo victim, but not on a crowd.
Except that they still have traffic data, which was the main thing they were collecting in the first place. Encryption only hides the contents of your communications - it doesn't hide who you were communicating with and when.
If we accept all claims to be true, that the NSA does have a PRISM program, and is able to get data from Google, Microsoft, Yahoo, Facebook, etc., and we also accept the claims from those companies that they have provided no 'direct access' to their systems, then perhaps SSL is broken?
There's no need to throw him in jail. They can just install a hidden camera and record him typing the password. Next time he's out shopping, the "maid" will drop by and copy the hard drive.
> The passphrase can be brute-forced significantly more easily than breaking the encryption itself. Furthermore, as xkcd so accurately pointed out, a hostile government will throw you in prison (or, worse, hit you repeatedly with a wrench) until you divulge your passphrase and data.
Not to detract from the point of your post, but for anyone interested, that's what TrueCrypt's 'plausible deniability' feature [1] is for. It can be used to create a hidden volume nested inside your outer volume, each opened with a different password. If you're ever forced to give up your passphrase by a government agency or anyone else, you give them the password to the outer (decoy) volume, and (in theory) you'll appear to be fully cooperating. It is impossible (short of cracking the hidden volume's passphrase through brute force) to prove, given only the passphrase to the outer volume, that the hidden volume exists. Ideally, you'd probably want to put something "embarrassing" but legal on the decoy volume (e.g., gay porn), to make the "plausible deniability" for using full-disk encryption more "plausible".
> Probably not. The next step that government would take would be to raid your friend Bob's apartment, arrest him, and take his hard disk. His OTR key (and, if he uses Pidgin, any stored account credentials in plaintext) is plainly available on the disk. They now have the private key.
The DEFINING FEATURE of OTR is forward secrecy: key compromise does not permit retroactive decryption.
Otherwise, we could just use TLS. (Technically, we could now, by enforcing the ephemeral Diffie-Hellman (EDH) modes.)
Weren't there cases of the government forcing people to give them their passphrase in the US already? Somehow I seem to remember something like this very vaguely.
You're probably thinking about the child pornography case that's going on right now where the accused was ordered by the courts to provide his passphrase. Higher level courts say you cannot compel that sort of evidence.
> The passphrase can be brute-forced significantly more easily than breaking the encryption itself.
Doesn't brute forcing this depend on the strength of the passphrase? For large enough N, if neither can be done in the next N years, does it really matter if it's significantly easier? Isn't there a non-negligible likelihood that in the next N years we'll figure out ways to break stronger forms of encryption but we won't figure out how to brute force strong passphrases efficiently?
Doesn't TrueCrypt use PBKDF2 or similar? In which case (assuming a good password) it would still be uncrackable in any practical sense.
Besides, in such an example the government would have to suspect Bob already on some other grounds. In the case of a despotic regime, they probably already have him in prison.
It does use PBKDF2, and the input to that is not just a passphrase but also one or more keyfiles, a salt, and some other metadata. My understanding is that the keyfiles should supply most of the randomness.
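For anyone curious what that derivation looks like in practice, here's a rough sketch using Python's hashlib. The iteration count, hash choice, and keyfile-mixing step are illustrative assumptions on my part, not TrueCrypt's actual parameters:

    # Shape of a passphrase + keyfile + salt -> volume-key derivation.
    import hashlib

    def derive_volume_key(passphrase: bytes, keyfile: bytes, salt: bytes) -> bytes:
        # One simple (hypothetical) way to mix in a keyfile: hash it together
        # with the passphrase before the PBKDF2 stretch, so the keyfile's
        # randomness dominates even if the passphrase is weak.
        mixed = hashlib.sha512(keyfile + passphrase).digest()
        # The salt defeats precomputed dictionaries; the iteration count
        # makes each brute-force guess expensive.
        return hashlib.pbkdf2_hmac("sha512", mixed, salt, 500_000, dklen=64)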
These edge cases are stupid. If 99% of Americans are safe, that's good enough. Dudes that actually break the law are not our problem; the government should get them. The problem is all the other people who are doing nothing wrong other than having opinions or beliefs that the government does not like. Or, as in the case now, are simply sending e-mails. If we can make it harder to get their shit, that's enough.
This whole idea that no, we can't do it, is defeatist. We can and we should do it, and then we should do more. As much as we can, to make it as hard as possible. And if the government still wants to do shit, then let them. That's their prerogative.
I have a question that perhaps a cryptography expert could answer for me.
My father told me when he was young, he visited Oak Ridge National Labs on a trip, and while there, they told him they had satellites that could read the print on a newspaper. At the time, it wasn't classified information; it was just something that nobody knew. Approximately 15-20 years later, satellites with that capability became well-known. This indicates to me that top secret technology is probably somewhere around 15-20 years ahead of what the general public knows about. This may be less true today than it was back then since nowadays the equipment and factories to develop state-of-the-art technology run in the billions of dollars.
Where I'm going with this: is it reasonable to assume that "future technology" 20 years from now could crack AES-256 or PGP? If so, it seems reasonable to me that the NSA could already crack today's encryption for high-priority data. Add that to the fact that they tend to hire the very best experts in the field (mathematicians and cryptographers) and it doesn't seem entirely unreasonable to me that their decryption technologies are pretty good. Of course, I'm not talking about better technology in a brute-force sense; it would still be impossible to crack 256-bit encryption. I'm talking about algorithmic weaknesses.
But then again, I have only a basic knowledge of cryptography. Would any experts like to comment?
This has come up in the past on HN. As I understand it, the newspaper story is bull. As for advancements in technology, the answer is likely no - producing that technology requires an entire toolchain/industry that the NSA is unlikely to replicate with its size. The only shot the NSA has at pulling ahead of us is with entirely mathematical things like crypto (which they did, at least in the 70s, with differential cryptanalysis). With math you can simply hire a bunch of smart people and throw them in a room together, which is much less capital-intensive than the massive, fundamental research needed to advance technology ahead of the industry.
Huh, interesting! I never knew that. So then I'll need to ask my father about what he was told again. Maybe something got misinterpreted along the way.
While it's true that the stars are very distant, the only atmosphere that the light passes through is the same atmosphere the satellites have to peer through (resulting in the same amount of distortion).
Hm, but with angles, distance matters too, right? Bending light 10 degrees will mean a much bigger difference light-years away than a couple hundred miles away. That said, I could easily be missing something...
The light isn't being bent light-years away. It is being bent at the beginning of the atmosphere, resulting in the same degree of bending as the satellite has to deal with.
All of the detailed views in Google Maps are actually aerial photography rather than satellite images. Much easier to get that kind of detail from 1500ft.
The resolution of a lens at a given wavelength is determined by its diameter (the Rayleigh criterion). We know how big the launch vehicles are, so we can estimate the largest size a spy satellite's mirror could be, and we can use that to compute the maximum resolution a satellite could have; it turns out to be something around 5-10 cm. In order to resolve a newspaper from near-earth orbit, you'd need a lens bigger than the ISS, and if such an object existed it would be one of the brightest objects in the sky.
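Plugging rough numbers into the Rayleigh criterion bears this out. The mirror size, wavelength, and altitude below are assumptions (a Hubble-class 2.4 m mirror, 550 nm green light, a ~250 km orbit), but they reproduce both figures:

    # Rayleigh criterion: theta ~ 1.22 * wavelength / aperture (radians).
    WAVELENGTH = 550e-9   # meters, green light
    ALTITUDE = 250e3      # meters, low-earth orbit

    def ground_resolution(aperture_m: float) -> float:
        theta = 1.22 * WAVELENGTH / aperture_m   # smallest resolvable angle
        return theta * ALTITUDE                  # meters on the ground

    print(ground_resolution(2.4))               # ~0.07 m: the 5-10 cm figure
    print(1.22 * WAVELENGTH * ALTITUDE / 1e-3)  # aperture needed to resolve
                                                # ~1 mm newsprint: ~168 m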
Yes, aperture synthesis is possible with optical wavelengths. I don't know how practical it would be to actually use with spy satellites, or how much of an improvement they could see with it.
Keep in mind that there are a lot of people that track satellites, even spy satellites, as a hobby. I haven't heard of anyone discovering two or more satellites orbiting in the sort of tight formation you would expect would be required for this.
> This indicates to me that top secret technology is probably somewhere around 15-20 years ahead of what the general public knows about.
Err, maybe in optics at the time, but you can't just generalize like this. You can't consistently be ahead of everything all the time. More than likely the NSA suffers under Moore's law like everyone else.
I wouldn't be that sure. The NSA made DES _more_ secure in the 1970s by influencing IBM's design of the S-boxes; regular cryptography research didn't even discover this until 1992.
This means that the NSA understood differential cryptanalysis 22 years before non-NSA researchers did. Their ability to brute force is meaningless if they have techniques that obviate its necessity.
"Where I'm going with this: is it reasonable to assume that "future technology" 20 years from now could crack AES-256 or PGP?"
There are a few related issues here, and so the answer is a bit complicated.
The only evidence for the security of AES is heuristic, based on testing the output of the cipher to check for properties that secure block ciphers should have. Some new attack strategy could completely undermine AES. Similarly, PGP relies on block ciphers and hash functions that are based on such evidence.
On the other hand, public key cryptography has proofs of security under certain assumptions about the complexity of certain problems. A proof that P != NP is necessary to prove that PKE is secure, but it is not sufficient on its own and we do not even have that much.
Now, assuming that (a) the heuristic evidence for AES and various hash functions is a reliable indicator of security and (b) the assumptions about computational complexity are correct, then both AES and PGP can be used essentially indefinitely. The reason is that your key size can continue to increase -- for AES, you can iterate the cipher (e.g. "triple AES"), and for PGP you can keep making your keys larger (16384-bit ElGamal?), and you will always be able to stay ahead of your opponent. There are issues with this approach, of course -- it would take a lot of computing power to actually use 16384-bit ElGamal, and eventually it would become impractical, which is why there is so much interest in elliptic curve crypto (which allows shorter keys to be used for the same level of security).
So the answer is, "Yes, from one perspective, No from the other."
> My father told me when he was young, he visited Oak Ridge National Labs on a trip, and while there, they told him they had satellites that could read the print on a newspaper.
It might be easy to find out. Just check when the NSA cancelled their newspaper subscriptions.
There's a general question among cryptographers: How far ahead of the general public is the NSA?
The answer: We don't know. Interestingly, back in the early 90s they were maybe 15ish years ahead of us. Decisions they made back then weren't understood until the 2000s. However, the gap may have been closed somewhat as of late. The public found flaws in SHA-1 (an algorithm by the NSA) in the 2000s that we believe the NSA didn't actually know about yet.
A general note on cryptography security margins (not really an answer, sorry, just some thoughts): The margins are designed to take into account future advances in technology. The community chooses the problems with the most conservative, best understood parameters, and that seem to be the least-likely to experience a break-through. As well, it tends to pick things with margins like "40+ years", meaning that even if we continue to improve our attacks and computing power at the same rate we have been, it is generally expected that it'll take at least 40 years before we're good enough to break it. (Obviously the experts turn out to be wrong sometimes.)
The community thinks very long term and tries to avoid ever picking anything that could plausibly be broken in just 20 years. It takes (in rough terms) at least 5 years just to get through the review process and get the algorithm into standards, another 5 to get it into widespread usage, and another 5 to transition away from it. So a very popular crypto algorithm probably needs at least 15 years from its introduction to run its life cycle, assuming there's no lull of happiness where it's just existing as a secure, commonly used standard. So there's little point in introducing any algorithm that has anything but a very low likelihood of being broken in 20, because it would spend only a fraction of its lifespan serving as a widely used, secure standard.
Last point: Breaks tend to be slow, and big breaks are usually smelled very far in advance. When we're 10 or so years out from a big break, we have a good chance of knowing it and we can start to transition away. Someone 20 years ahead of us may very well only find big breaks just before we start to migrate away from the algorithm.
Due to that, we might optimistically (optimism has no place in crypto, I know ;-)) think that an adversary 20 years ahead of us has relatively minimal advantage.
> they had satellites that could read the print on a newspaper.
This is physically impossible, for the reasons given by marssaxman below; specifically, the resolution of an imaging system is limited by diffraction. In order to read a newspaper from orbit, you would need a ridiculously large aperture. Furthermore, you've certainly seen declassified Cold War satellite and aerial (U-2) imagery. You know what it looks like. Do you seriously believe they had something else that could read newspapers?
Assume that over some time frame a crypto solution (the entire system matters as it's very easy to use encryption insecurely) will be compromised. You are essentially buying yourself time. So the question is how much time do you need to buy yourself.
Ok, so say you select a very large key (the real difficulty depends not only on the size of the key but on the encryption algorithm as well). A 4096-bit key gives about 1.04 × 10^1233 combinations; assuming someone tried to brute-force it at 100,000 checks per second (a low estimate), it would take roughly 1.7 × 10^1220 years to crack on average (searching half the keyspace).
2^4096 / 2 / 100000 / 60 / 60 / 24 / 365
So that's a freaking long time to keep that data secure. Even radically scaling up the brute force attack across the entire world would be akin to boiling the oceans. (Not going to do out the cpu/watt/check number calculation to determine how much energy it would actually take compared to boiling the oceans...)
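You can verify the arithmetic directly; Python's arbitrary-precision integers handle it exactly:

    # Sanity check of the numbers above.
    KEYSPACE = 2 ** 4096
    GUESSES_PER_SECOND = 100_000          # the deliberately low rate assumed above
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    avg_years = KEYSPACE // 2 // GUESSES_PER_SECOND // SECONDS_PER_YEAR
    print(f"about 10^{len(str(avg_years)) - 1} years")   # about 10^1220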
So are you safe? No, because starting in WWII, very smart mathematicians were finding ways to crack the algorithms and find patterns and holes in the encryption solutions that made the search space orders of magnitude smaller. So the best thing we can do is select well-attacked, well-researched but still secure systems, use a good key length, and pray (I am not a religious man).
Edit: If you wish to be truly paranoid (I don't recommend it): most of the important crypto research has been done by state organizations; this is how AES was selected from a group of designs submitted to NIST. Conversely, there are few still-secure and well-researched algorithms besides AES out there (elliptic curve cryptography, basically, but some of those designs are under patents and so not widely available, etc.).
Edit 2: Also, Wikipedia is a great starting point for understanding, but is not always complete. I still haven't seen it properly explain initialization vectors or nonces.
I'm not sure you saw the last part of my post :) I mentioned the same thing about brute-forcing and focused on the mathematical weaknesses aspect of it. I have written some ECDSA code myself from scratch so I am familiar at least with the very basics of it.
Yeah I think I started writing before I really got your question, my apologies :)
It is very possible that government-based research on crypto cracking is several years ahead of the rest of the world, but the attitude toward these systems around the time of the dot-com bubble was switching from security through obscurity to working in the open. The reason is that the research cuts both ways: the US relies on AES as much as you or I. If there is a problem with it, they would be burned as well.
From the article: "And while most types of software get more user-friendly over time, user-friendly cryptography seems to be intrinsically difficult. Experts are not much closer to solving the problem today than they were two decades ago."
I'm not sure I agree that user-friendly cryptography is "intrinsically difficult." It doesn't seem like it would be hard for email clients and even the Gmail frontend to pop up a message saying, "Your email is insecure. To let people send you private messages securely, set up your 'public key' now. It's easy." Then a short wizard would walk users through the process and automatically append the public key to all outgoing messages.
On the other side, if you were going to send a message to a friend, the email client would check if that person has published a public key and then ask, "The recipient allows secure messages. Would you like us to send this message securely?"
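To make that flow concrete, here's a rough sketch of what such a wizard could do behind the scenes, using the python-gnupg wrapper around GnuPG. The keyserver choice and key parameters here are illustrative assumptions, not a spec for how Gmail or anyone else would do it:

    # Hypothetical "secure email wizard" plumbing via python-gnupg.
    import gnupg

    gpg = gnupg.GPG()
    KEYSERVER = "hkps://keys.openpgp.org"   # assumed choice of keyserver

    def ensure_own_key(email: str, passphrase: str) -> None:
        """Wizard step one: create a key pair if the user doesn't have one."""
        if not gpg.list_keys(secret=True):
            params = gpg.gen_key_input(name_email=email, passphrase=passphrase,
                                       key_type="RSA", key_length=3072)
            gpg.gen_key(params)

    def send_securely(recipient: str, body: str):
        """Wizard step two: if the recipient published a key, encrypt to it."""
        found = gpg.search_keys(recipient, keyserver=KEYSERVER)
        if not found:
            return None   # fall back to asking the user about plaintext
        gpg.recv_keys(KEYSERVER, found[0]["keyid"])
        result = gpg.encrypt(body, recipients=[recipient])
        return str(result) if result.ok else None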
Google and Microsoft and other large companies are no strangers to implementing a feature and using their size and clout to quickly make it a de facto standard. The real reason we don't have easy end-user cryptography is that these companies would lose the ability to mine your data and provide new services on top of it (and the article mentions this too).
"The real reason we don't have easy end-user cryptography is that these companies would lose access to mine your data"
Jeremy Kun recently wrote a good article summarizing some recent advances in encryption that make your statement somewhat less-than-entirely-accurate (scan for "differential privacy"):
And where is the private key stored? On Google or Microsoft's server? What then would be the point? (I assume you'll answer that it'll be done client-side, but JavaScript cryptography is a whole mess of fail. But that's a separate issue.)
And if it is stored client-side, what happens when the user inevitably loses their key? You and I might have backups in multiple places, and on an encrypted USB stick in a bank vault, but my dad doesn't, and the next time he spills wine on his laptop, there goes literally all of his e-mail.
Issue the user two smartcards, one for daily use, one that can be used to create a new daily use smartcard. Tell the user to keep the backup smartcard in a safe place.
Yes, someone will inevitably lose both. You just need to ensure that that is a rare event, and that there are alternative systems in place (i.e. that losing access to one system does not prevent people from living their lives).
I'm familiar. There's a big difference between "optional key escrow with a service I have chosen to trust" and "mandatory key escrow" though. Most importantly with regard to the ease of mass surveillance.
This is the real reason why cryptography hasn't caught on. It's opt-in by nature - No matter how hard you try, you can't send someone an encrypted message if they don't have a public key for you to use.
Actually, yes you can. Check out identity-based encryption and Voltage Security. It's currently in use by Wells Fargo, ADP, and other large enterprise customers.
The catch there is that IBE requires a centralized, trusted key-issuing service where you need to enroll to receive your message. If that's compromised, then game over.
Of course, you would need to be judicious about which group of key issuers you are willing to trust, but this method will at least reduce the risk. The other nice thing about this is that even if some key issuing service is compromised, the sender can force the receiver to switch services (compare to the TLS model, where dropping a CA is basically a coordination game problem).
Client-side, with a passphrase. A backup could be stored on the server or in your dropbox or wherever you'd like. It would be important, as part of the onboarding process, to communicate the need to keep the key safe and what happens to your old emails should you lose it.
As for using js for public key encryption, I've implemented it for a client and didn't have much trouble. There are libraries that work around the usual problems. What have you seen out there that would cause a problem?
If you're doing public key crypto on the client side in javascript, then the client side JS must necessarily have access to the private key (unless you have a TPM _and_ browser hooks to use it). This means that suddenly the private key is vulnerable to any XSS attacker that can inject itself into the same origin as your javascript crypto code.
Fair point. XSS likely wouldn't be a problem in the case of a desktop email client. But in the case of a Gmail or Outlook.com frontend, I can see how you would be concerned about something in the js served up by Google or MS capturing the private key and sending it to the server.
That said, couldn't this be mitigated by having a strong passphrase on the private key? How hard is the wrapper to attack?
Also, couldn't security researchers easily monitor the packets on this process and sound the alarm should they find that the js served up by Google or Microsoft suddenly starts sending private keys to the server?
AFAIK a strong key passphrase would be effective at protecting the private key while it's at rest (stolen laptop / hard drive). However as soon as the private key is pulled into memory for a signing or encryption operation the passphrase doesn't matter as the raw key is needed at that point.
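Here's that distinction in miniature, sketched with the Python `cryptography` package (the passphrase and key parameters are just for illustration):

    # At rest: the private key is stored under a passphrase-derived wrapper.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"long passphrase"),
    )   # safe-ish to store: useless without the passphrase

    # In use: loading it for any signing/decryption operation puts the raw
    # key material in process memory, passphrase or not -- that's the window
    # an attacker in the same origin/process gets.
    loaded = serialization.load_pem_private_key(pem, password=b"long passphrase")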
As for your second question, there are techniques that perform static and dynamic analysis on javascript to try and detect illegal flows or taint propagation (without having to resort to monitoring the outbound network traffic). See [1] and [2] if you're interested in that topic.
Also, this isn't a hypothetical attack. Basically the same setup is used for client-side bitcoin wallets, and there have been reports of thefts (stolen keys).
Not so. Nothing stops Linux distros from defaulting to mail and file systems that encrypt everything, but the reality is that most people can't be bothered. I certainly can't, and I don't feel like putting myself out to encrypt everything in order to make it popular. As has been pointed out, the metadata of who you email and phone, while not probative in the same fashion as the contents of calls and emails, is nevertheless a significant source of data, and encryption won't alter that without major changes to the architecture of mail.
Intelligence-community-proof is somewhat of a fallacy. You can make it more expensive for the NSA to get your email, because then you're forcing them (or another arm of the government) to penetrate your client and extract the key there.
And, all you have done is make damn sure they keep your metadata records. Somewhere I read that sending encrypted email is an automatic flag, in the same category as using words that incite violence.
So to truly make it effective, encrypted email has to be the norm, not the exception.
That sounds unlikely, encrypted email is relatively common in business. For example many domain registrars used it as a mechanism for changing domain settings before "APIs" were a thing.
That's the question about this whole thing that I don't know how to answer. Just to extend it a bit...
There are OSs that won't give root access to the NSA, encryption that the NSA won't be able to read, and cloud services that the NSA won't be able to access even with the cooperation of the CEO. Why are none of them widely used?
And I don't accept the answer in the article as sufficient. Yes, a few things are harder when you want any level of security, but not all. There are plenty of applications where security just won't disturb you (like VoIP), and plenty of places that put security above all other concerns and should care about this (like non-US militaries). Yet nearly nobody chooses the secure path.
The glib answer is that it's a matter of triage - given usability, security and cost, choose any two.
While that is an influence, the reality is a bit more complex. The social drive we've been going through for the last ~5 years means that developers focus on customer value. Also, while there's little new in Snowden's leak - other than absolute proof - there's not been much demand, and hence little incentive, to make things secure.
Because the vast majority of people like privacy in theory but not enough to spend the hour it would take to learn how to encrypt their email and documents.
Seriously, how many HN users have spent hours complaining about privacy on here but still don't encrypt their own email? This isn't to excuse anything illegal the US gov't might be doing, but if it matters as much to people as they say you'd think they'd have at least taken some immediate action.
>Seriously, how many HN users have spent hours complaining about privacy on here but still don't encrypt their own email?
I would think that most HN users would be willing to encrypt their email, but know they can't convince their friends/family/etc to do so. Encryption takes two to tango.
To put things in perspective: at CRYPTO at some point in the past, I had a student stipend. I needed to send some documentation via email. I asked the person responsible, a prominent cryptography researcher (who will not be named), if they had a public key. The answer was, "No, I really should set one up but I'm just too busy."
When not even the researchers who run a top-tier cryptography conference are bothering, you know that it is not just about non-technical folks being clueless.
I'd like to think that's true, but I doubt it. I bet tons of HN users run in the same circles as other HN users and communicate regularly, but never even consider encrypting their emails.
Let me explain some of my travails trying to use PGP with Thunderbird:
The install of T-Bird wasn't too bad
The install of OpenPGP was not easy, but I managed it. The instructions on the site were not all that clear and were for an out-of-date version, but YouTube helped out a lot. My mom, the business owners, or a computer science teacher at Central High School simply do not have time to do this. This could be streamlined.
The making of keys and storing of data was totally obtuse; fortunately, the wizard guided me through a lot of it. This could be streamlined.
Now sending a message is where it gets tough. OpenPGP says that I have to use [shift]+"left-click" on the Write button in T-bird to make sure HTML won't be used, so the PGP message will be decrypted correctly. This is nonsense. Why is this happening?
Ok, now assuming I have a plain-text email, I have to hit [ctrl]+[shift]+[s] and [ctrl]+[shift]+[e] to sign and encrypt. BS. This needs to be better. Just give me a pop-up where I type in the pass-phrase (brilliant wording, btw; "phrase" makes it so clear that it has to be many words long, my mom can understand this).
Ok now my buddy can't read it because I did not send him a public key? What the hell are those? Why do I care? I thought I put in my pass-phrase? Didn't he? What is going on?
I sort this out, I find the public key and send it over. Now he can read it. But wait, I have another buddy that I have to do this with. Where were those options in the menus again?
There needs to be a button that remembers if I sent the public key to them, sends it if I did not, and then automatically tells their email client that I don't have theirs and gets theirs with permission from them.
Awww, fuck it... the NSA can probably crack this anyway.
It's even worse than you think, because several of the things you said in there don't actually make sense. It sounds like you possibly didn't manage to get the message encrypted at all, just signed.
And how you exchange public keys matters a great deal -- if you just send them over email, you haven't actually achieved any meaningful security.
So yes. The entire process is a usability nightmare.
How timely. This NSA fiasco has prompted me to finish up an old project, https://boxuptext.com/, a convenient webapp that encrypts a message into a URL entirely in the browser. It's ready for use.
Not many people use crypto because in general it's hard to set up and hard to use. A webapp is accessible and easy to use, and provides reasonable security.
I know there's a prevailing view against doing crypto in Javascript, and I've gone the extra steps to address the negatives. In the end I think the benefits of doing Javascript crypto in the browser outweigh the negatives. See https://boxuptext.com/faq#benefits
It's not about convenience.
It's about money. Like everything, really.
Using GPG/PGP for example (which IMO is the best solution) is nice. It has a good, convenient design.
The clients, UI, etc. are terrible. They're extremely inconvenient.
That can be fixed. This needs some time and a little dedication.
Nobody will pay for a product that has proper, easy, fast PGP support across the board. Nobody.
Since it's not a trivial task, and the benefits are "only" privacy, it didn't happen yet.
If anything, people re-code their own incompatible and generally lesser version of PGP, because they'll get financial gain or popularity from it (patching GPG doesn't give you as much popularity as making your own, you see... and we're quite ego-driven / NIH-happy).
So, here we are. And I'm to blame too, I haven't worked on this either.
I'm secretly hoping things like PRISM will actually help move this forward.
You could build a Chrome (or Firefox) extension that added GPG/PGP/SMIME to Gmail; you would have to intercept the emails before they were stored as drafts in order to protect the messages in the inbox. You could use a plugin or native client to interface with the OS or desktop environment's keystore to keep the private key out of Javascript.
The key passphrase could double as the passphrase for symmetrically encrypting the message stored in the inbox.
Add to this a keyserver for automatically discovering public keys of contacts and you have a "good" solution between interested parties, without compromising recoverability of the majority of your messages.
You could do the same for Gtalk/Hangouts chats with OTR.
The main point for me is to have a government I don't need to protect myself from. And more generally, a society where I don't need to disguise my every action. Points about the impracticality of strong encryption are secondary. Here are some of them anyway:
* The vast majority of internet users don't have the domain knowledge needed to use strong encryption effectively. A classic example with e-mail is using a prominent phrase from the plain text of the message body in the (unencrypted) subject field.
* Any cryptography scheme is vulnerable to social engineering, attacks on the trust networks used to exchange keys, etc. Avoiding these requires a nontrivial and ongoing amount of effort even for expert users.
* Encryption complicates archival and search of content even for its author.
* Any service that would help users with the above would be legally obligated to provide information to authorities anyway.
> Experts are not much closer to solving the problem [of user-friendly cryptography] today than they were two decades ago.
I disagree. Recently there have been breakthroughs in homomorphic encryption. From Wikipedia [1]:
"...any circuit can be homomorphically evaluated, effectively allowing the construction of programs which may be run on encryptions of their inputs to produce an encryption of their output. Since such a program never decrypts its input, it can be run by an untrusted party without revealing its inputs and internal state."
While currently known constructions with the right mathematical properties are kind of slow, I'm sure that a lot of people are now interested and in the future we'll eventually be able to do it at practical speeds (especially with the help of future computers that are faster, and/or have more cores, and/or have dedicated coprocessors hard-wired for homomorphic encryption computations, like recent x86 chips have hardware accelerated AES [2]).
If this happens, websites will be able to implement features, like search, that rely on manipulation of user data, without having access to that data themselves.
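The fully homomorphic schemes described above are still slow, but you can get a taste of the idea today with the additively homomorphic Paillier scheme, e.g. via the python-paillier (`phe`) library; the values here are arbitrary:

    # A server computes on data it cannot read (additive homomorphism only).
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Client encrypts its values and ships only ciphertexts to the server.
    enc_a = public_key.encrypt(17)
    enc_b = public_key.encrypt(25)

    # Server adds the encrypted values without ever seeing 17 or 25.
    enc_sum = enc_a + enc_b

    # Only the client, holding the private key, can recover the result.
    assert private_key.decrypt(enc_sum) == 42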
> [Certain] features depend on Facebook’s servers having access to a person’s private data
Today this is true, at least for people who aren't on the cutting edge of research in this field. But it might not be true tomorrow, if homomorphic encryption ever becomes practical (both in terms of fast algorithms, and in terms of frameworks/libraries which make it easy for developers to use).
Off-topic remark: Homomorphic encryption will also impact the economics of cloud computing, since you'll be able to use CPU cycles provided by others without the security concerns of disclosing the unencrypted confidential data you want them to manipulate.
We do. We use it in the browser, for communicating between client and server. Service providers use it internally, for storing messages on disk.
The problem is not one of operations; the problem is one of law. Google (and others) have been forced under federal law to provide the plaintext to the government, or have individuals at those companies face jail time.
This is not a technological problem, and there are no technological solutions.
Hmm... the writer of the article almost exactly echoes the narrative of Moxie Marlinspike's Defcon 18 talk, "Changing Threats to Privacy": http://youtu.be/eG0KrT6pBPk
This simply isn't true. Even if you (likely with a few orders of magnitude of margin) overestimate total world computing capacity at 1e21 decryption operations per second, it's going to take you about age-of-the-universe seconds to brute-force a single 128-bit key. No amount of money or supposed 'exponential technology growth' is going to let any government brute-force these anytime soon. And those are the smallest symmetric keys in wide use.
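The back-of-the-envelope math checks out:

    # Time to exhaust a 128-bit keyspace at a wildly generous guess rate.
    KEYSPACE_128 = 2 ** 128                 # ~3.4e38 possible keys
    WORLD_RATE = 1e21                       # overestimated ops/sec from above
    AGE_OF_UNIVERSE_SECONDS = 4.35e17

    seconds_needed = KEYSPACE_128 / WORLD_RATE
    print(seconds_needed / AGE_OF_UNIVERSE_SECONDS)   # ~0.8 universe-ages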
Compute power is far from the most effective brute-force method for getting someone's encryption key. Other means include rendition, waterboarding, jail, similar threats, and keyloggers.
The OP's contention was that government computational resources and the progress of technology itself make currently-available, robust encryption technology worthless. This is inaccurate.
The fact that someone can beat a key out of you is not what is usually meant by 'brute forcing' a key, in a cryptographic context.
The numbers bandied about for total world computing power (most of it probably stuck in GPUs doing Windows animations and playing Call of Duty) are in the 1e18 ops/sec range; the age of the universe is in the 4e17 second range.
And I've left a great deal of margin by treating a 'decrypt and verify' operation as a 'basic operation' and by using a completely preposterous time period like the age of the universe instead of, say, 10,000 years. Nobody is brute-forcing 128-bit keys anytime soon; that's a pretty basic mathematical and physical given. But if you're particularly paranoid, you can just as easily use 256-bit keys - nobody is brute-forcing those until we hit the singularity and become a galaxy-encompassing brain. Even then it might give us a serious, millennia-long headache.
The only feasible thing against AES 128 right now would be if a flaw was discovered. This is also why it doesn't matter whether you use 128, 192, 256, or higher. If one is flawed, they all are. The reason we even have higher than 128 is simply because the government loves excessive amounts of redundancy. It lets them sleep easier at night, even if it's completely unnecessary.
@aryastark - What's the nature of AES 128 & Co., meaning: Who can and cannot discover flaws? Is this like open source where the whole world can watch or is this somehow a closed thing like Windows, MacOS, etc.?
These are openly available, highly reviewed algorithms - their adoption as standards, too, is done by a process of open competition. And it's probably safe to say that the amount of research and analysis being done on them in the open exceeds that done in secret by government agencies by a wide margin.
Also, note that while AES and friends are public and designed "in the open", the government also has a number of algorithms that are developed and used internally and are not "public".
Correct. The AES family of algorithms are public knowledge, not proprietary. They are open for scrutiny by mathematicians and crypto experts across the world.
As a (related) example, consider the SHA family of hashing algorithms.
Competitions are held in which anyone can submit their own algorithm as a possible contender. Like a beauty pageant, the entries are narrowed down over a few rounds until just a few are left. Eventually, one algorithm is chosen and declared the winner!
The whole process is done in the open and, similar to RFCs, anyone who wishes to may provide feedback.
The SHA-3 competition took roughly five years. You can read about the whole process on the NIST's web site:
I'm not a developer or math whiz, so most of the underlying principles of crypto are over my head, but reading about the process (above) is quite interesting and gave me a much better understanding in general.