What's the matter with PGP? (cryptographyengineering.com)
271 points by silenteh on Aug 13, 2014 | 163 comments



At one point in this essay, Matt suggests that every successful end-to-end encryption scheme has employed transparent (or "translucent") key management. What he's referring to is the idea behind, say, OTR: two people can use it without the key handshake required by PGP.

Matt is wrong about this. He's being victimized by a pernicious fallacy.

It certainly appears that the most "successful" cryptosystems have transparent keying. But that's belied by the fact that, with a very few exceptions (that probably prove the rule), cryptosystems aren't directly attacked by most adversaries... except the global adversary.

In the absence of routine attacks targeting cryptography, it's easy to believe that systems that don't annoy their users with identity management are superior to those that do. They do indeed have an advantage in deployability! But they have no security advantage. We'll probably find out someday soon, as more disclosures hit the press, that they were a serious liability.

There is a lot wrong with PGP! It is reasonable to want it to die. But PGP is the only trustworthy mainstream cryptosystem we have; I mean, literally, I think it might be the only one.


Hi Thomas. I used to think this way too. I think this is certainly a fine way to think about things if your goal is to keep encrypted email deployment limited to the 3-4% of email users who are either technical experts with nothing to say or people who are sending obviously sensitive documents. It doesn't scale much beyond that.

Moreover, I would argue that a 'translucent' key management infrastructure /can/ be better in all ways than PGP. For example, imagine that Google provided a transparent key distribution service for all its users, but also allowed you to verify key fingerprints manually before sending messages. Congratulations -- for users who care, you've got something that works every bit as well as PGP. Everyone else isn't sending plaintext! Sure an attacker can compromise them, but it requires an expensive MITM attack. They have to be targets a priori, not after the fact. I'm struggling to see how anyone is worse off here, except through the nebulous reasoning that 'making things easy' makes people careless. Making things hard definitely makes people careless -- I've seen this firsthand.
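To make that concrete, here is a minimal sketch of the optional-verification flow (Python; a plain SHA-256 hash stands in for a real key fingerprint, and all the names here are hypothetical):

  import hashlib

  def fingerprint(key: bytes) -> str:
      # Stand-in fingerprint; a real system would use the OpenPGP scheme.
      return hashlib.sha256(key).hexdigest()

  def fetch_key(directory, address, pinned=None):
      key = directory[address]  # transparent path: most users stop here
      # Users who care compare against a fingerprint verified out of band.
      if pinned is not None and fingerprint(key) != pinned:
          raise ValueError("directory returned an unexpected key (possible MITM)")
      return key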

But more to the point, even paranoid users have a lot of options that are better than PGP. Using ZRTP to establish secure channels is a very safe way to do things, assuming your attacker can't really forge voiceprints (and this seems hard, even for the NSA). From that point you can push strong public keys out to a dedicated text/email app. That we don't do this is not so much because it's a bad idea -- it's because so far people haven't tried it.


You still have to decide for yourself whether the other person's key is trustworthy. If you let the software do it, you need to trust the software developers, the software providers (how did you install the software?), etc. So taking key management out of the hands of the users might increase usability, but it automatically decreases security.


In many cases, this is an acceptable trade-off. Security is not one size fits all; your average user cannot afford to be NSA-level paranoid. Otherwise we would spend all of our time verifying keys and not actually doing anything.


True. But he will think he is safe from the NSA if he uses an app on his iPhone with the word 'Secure' in the name. And this is actually what many of the apps out there are promising, without being able to say for sure themselves.


You can't protect people who aren't willing to put in a little effort to verify the programs they use are safe. Security is all about trust, and you shouldn't trust an unknown entity. But if such a product came from Microsoft or Google, the user would have more faith that it's secure.


> They do indeed have an advantage in deployability!

Deployability is worth more than might be immediately obvious. The question of deployment is directly related to the business of availability. If 99% of people used a cryptosystem that provided no real security beyond thwarting a global adversary, but which also had the option of further measures to confirm identity when necessary, we would be in a much better position than we are now. It would mean that when I do have contacts I'd prefer to use encryption with, the software will already be on their system and I only need to instruct them in how to use it.

There's a fallacy in saying that no measures are better than imperfect measures 'because then you don't have a false sense of security'. Regardless of security measures or not the false sense of security is baked into how people use computers.

I challenge you to go explain to anyone you know who does not identify as a 'computer person' how email routing works and why it's quite possible for a 3rd party to read their mail. If they even understand what you're talking about, the response is an almost universally cool 'Oh, but so many other people do it, and what am I supposed to do, stop using email?'

The vast majority of people will not stop talking just because in the abstract they might be overheard.

EDIT: As a note, I think any centralized system of key distribution is fundamentally insecure.


You educate them.

The way I did this in a past life:

- Explain that email is a postcard.
- Give examples of information that does not belong in email.
- Demonstrate the issues people create for themselves by breaking the rules.
- Provide tools to accomplish business needs (file transfer in particular).
- Tell them that their email is subject to audit.
- Implement technical controls to warn about or block suspect behavior.

People want to do the right thing, but you must set clear expectations so they know what to do! The most secure email is one that is never sent.


As a 'computer person' my response is much the same.

You can however get them to consider what they put in there by explaining "email is like a postcard; everyone who touches it can read it".


> I challenge you to go explain to anyone you know who does not identify as a 'computer person' how email routing works

I usually use the "It's like mailing a postcard" comparison.


PGP is only trustworthy if both parties treat key management with the utmost severity, and if everyone in the conversation maintains the integrity of a given thread (in the email case).

There are a precious few individuals for whom I have that level of trust in their management of their private key. I could not even trust my wife to manage a hardware key that I gave her, it would fall apart immediately; "I cannot use this key on my chrome book? I cannot use this key on my Galaxy? I cannot use this key on my iPad? Give me a soft key that I can use, or a cloud service..."

Therefore, PGP is not mainstream. There is a large population of people doing it incorrectly, and they must because they have no other real choice.


Both of these statements can't possibly be true:

* PGP is only trustworthy if both parties treat key management with the utmost severity.

* Transparent key management systems that rely entirely on heuristics and click-through warnings are trustworthy.


Really they can both be true. They're just true for different constituencies. The vast majority of users are not targeted by active attacks, nor should they have reason to be. They may be hacked or subpoenaed after the fact.


I thought that "active" attacks are exactly what we were trying to guard against?


Presumably the hope for transparent key management would be something like the CA system used in TLS, with certain reforms (certificate transparency? Namecoin?) which make it visible when a CA has been hacked or has collaborated with a global adversary.

So there's just a dozen or so central authorities who need to handle keys with the utmost severity.


I'm baffled by that attitude (and CT, as well). So, you found out after the fact that the global adversary injected themselves in the middle of your conversation with a source. What do you do now? Move to an apartment in Russia?

Also the things that break transparently-keyed systems do so repeatedly. That's what transparent keying means: it's mediated by machines, and factored the slow, clumsy, human interactions out. CT? Audit logs? It's like Lucy and Charlie Brown with the football, except Lucy is hooked up to a for() loop.


  So, you found out after the fact that the 
  global adversary injected themselves in 
  the middle of your conversation with a 
  source. What do you do now?
If you could reliably detect a CA issuing MITM certs to the global adversary, and if some unstoppable mechanism would respond to such detections by promptly dropping the CA's cert from clients' trust roots, and if being dropped from the trust roots put the CA out of business, then it would be extremely difficult to induce a CA to issue MITM certs.


What is CT in this context?


I believe it's Certificate Transparency, a Google project for globally monitoring SSL certificates.

http://www.certificate-transparency.org/


Can you (or anyone else) point me and others to a good text on properly handling keys? Thanks!


I would freaking love it if some good designer with technical knowledge, or help from technical colleagues, would create simple flowcharts of "How to use Encryption".

Each chart would be one page. It would walk a user through one step of using PGP/GPG. It would include links to in-depth reading.

The final sheet would be walking through common mistakes that users make.

This is one of the things I'd pay for if I had FU money. I'm sort of tempted to kickstart the idea.


It's not doable, because there are major unsolved problems. Protection of a private key, for example.

The only outcome of such a chart would be reams of people utterly vulnerable and thinking that they are not.

If you constrain the environment, e.g. "How to use SSL certificates in Chrome on Mac OS X Lion" then there might be a chance that could fit on one page in an easy to understand format.


> Protection of a private key, for example.

I have a master key that was created offline, and use a subkey on a USB smart token. It works well but it was a bitch to set up. Apparently Qubes OS has a hardware-virtualized PGP container for protecting private keys, but that's not a viable solution right now.


The offline masterkey stored on an encrypted medium is the approach that makes most sense. This[1] guide is pretty decent, and somewhere on the Debian wiki is essentially the same information (but less nicely presented).

The OP's complaints boil down to two IMO: 1) poor UX; 2) trust is difficult to manage.

1) Seems solvable to me, again with subkeys for encryption and signing (which I believe are created by default by GnuPG 2.0 anyway).

2) Asking technology to solve a problem which only each individual can answer ("do I trust this person to evaluate other people's identities as carefully as I do?") is not doable.

The example of someone getting a journalist's pubkey and being stung by the (since fixed) bug whereby the wrong key may be imported is again letting the user off the hook. If someone is not actually using GnuPG's abilities to examine the WoT and see who has signed the journalist's key etc., then it's an example of magical thinking, with "encryption" replacing any other nostrum.

I don't believe that problem can be solved with technology.

1. https://alexcabal.com/creating-the-perfect-gpg-keypair/


See the [GNU Privacy Handbook](https://www.gnupg.org/gph/en/manual.html).


That's probably because like most people, you don't really need to have a completely secure communications channel with your wife.

All security decisions boil down to some calculus of risk, impact and cost. The most sensitive conversations that my wife and I typically have remotely aren't ones that justify the cost (both in terms of hassle and $) of carrying a secure device around.

Personally, if I were in a situation where I was remote and my physical safety or livelihood could be compromised from an email, my wife and I would probably suck it up and run around with secured netbooks or something. In my case, I don't see that risk/impact calculation adding up to requiring PGP.


Someone else might need to have a secure communication channel with his wife. I never read the comment as "I can't communicate securely with my wife", more "My wife is not in the set of people with whom anyone can communicate securely" (because secure key management is hard).


That's fine. Just don't expect to do it transparently with your iPhone.


See "De-Anonymizing alt.anonymous.messages" for an example of people doing crypto (including PGP) wrong.


I think the argument is that PGP is so difficult to use that by and large people just won't bother.

Yes, transparent key systems would likely be less secure than PGP. But if the usability were significantly better and people used them, that would be better than the alternative of using nothing. For many of these solutions, there is a window of vulnerability surrounding the key exchange that closes if you aren't snooping traffic at that moment, so it's not like they're completely insecure options; it's just that they have attack vectors that may be considered acceptable risks in many situations.


An irreconcilable difference between you and me on this point: you think it's a good thing if people use bad crypto instead of no crypto, and I don't.

I don't think bad crypto makes the global adversary go "aw, shit, we better target someone else". I think it makes them go "excellent, something else we can get a secret appropriation to go break".


There's a difference between bad crypto and easier-to-use crypto. Just because it isn't 100% secure all the time doesn't mean it's not useful. If they have to go through the trouble of breaking each individual message rather than just gathering everything in a dragnet, that's good enough in my mind. You have to make mass surveillance expensive enough that it's no longer worth it.

You as an individual cannot fight a state actor. It doesn't matter how secure your crypto is; they can hold a gun to your head and force you to give up the key (or in more civilized countries, throw you in prison forever). If you become an individual target to a state actor, there is literally nothing you can do to stop them unless another state actor is willing to protect you: they have the resources of an entire economy behind them and there's no security solution you can cobble together that will be able to keep them out.

Even Snowden, who practices a paranoid level of OpSec, just assumes his electronic communications are being read. The only reason the CIA hasn't done an extrajudicial rendition on him is that he is living under the protection of another state actor (Russia).


My impression (I could be wrong) is that a lot of OTR users in particular are worried mainly about the local adversary: someone on the coffee-shop wifi, or the corporate IT administrator, sniffing their IM traffic. In that case you just have to have an encryption setup that's good enough to circumvent whatever analysis that class of adversary is likely to use.


If a global adversary can do it, then it is only a matter of time before a local crime syndicate can do it as well, after all stealing credit cards and other sensitive info is a booming business.

And what really bothers me is that this will give people a false sense of security. At least right now I'm seeing regular folks refraining from exposing sensitive info online out of fear of evil hackers that are often the subject of news. So yes, I think unencrypted email is better than a solution that isn't secure.


I agree, and I think that's especially a good argument for email. For mail submission at least, I think most users for that use-case are now either using or moving to encrypted SMTP AUTH with certificate checking, which should be fairly robust on the local side (between you and your ISP/company), modulo the problems that exist with the CA system. For IM though I think lots of people are more worried about embarrassment than crime: someone grabbing & posting your cybersex logs online; or your comments about office politics (or an affair, or whatever) being read by snooping IT staff, that kind of thing. Some people specifically use IM for office-politics stuff rather than email, because they assume (probably correctly) that IT staff can more easily pry into their email.

Of course for that use-case you don't really need end-to-end encryption: an encrypted connection to the IM server would be fine, and maybe actually better. But a bunch of services don't support that (though Google Talk does).


You need end-to-end encryption anytime the IM server isn't under your control. It's a reasonable assumption that if they could log everything, some manager somewhere has ordered them to do it regardless of legality.


If the threat you're trying to protect against is your local IT sysadmin eavesdropping on your conversations about office politics, the fact that Google Talk may internally log your IMs is close to irrelevant. The NSA might be able to get those logs from Google, but your coworkers probably can't.


This is a fair point and one I hear often, but can you be sure that for as long as you live, you will never have a reason to fear the global adversary?

I trust my government (within reason) at the moment but I'm not comfortable betting that they will never ever turn anti-gay and start coming after me.


I said this above, but I'll say it again here: if your government has identified you as a target, there's very little you can do but hide and hope you can find another government willing to protect you.

That said, crypto is useful in avoiding their gaze in the first place. For this, vulnerable crypto is better than nothing: assuming the vulnerable crypto requires a non-trivial and non-repeatable process to break, it's unlikely that even a state actor is going to bother breaking it for the entire population.


> That said, crypto is useful in avoiding their gaze in the first place.

I'm not entirely sure that I agree. I've long thought that using encryption above and beyond what the average person employs would be a great way to appear on 'their' radar. I don't have the need, so I'm happy not trying to find out. That said, if everyone had strong encryption enabled by default, no one would stand out, which I support.


Or you know, move to a country functioning under the rule of law. I don't get this mentality, but regardless, if you're paranoid then encrypt/decrypt your messages on a computer that's never connected to the Internet. There, problem solved.


That's what ROT-13 and secret decoder rings are for.


This elitist attitude is why I have no faith in typical "security experts" improving the overall situation with communication security for normal people.

Widely adopted open-source "bad crypto" can:

- Raise users' level of awareness about security.
- Create a market that can later be served by better crypto.
- Encourage creation of infrastructure that can later be used by better crypto.
- Encourage investigation of better UIs.

Also, the other posters are right about increasing the difficulty of executing an attack.


> Widely adopted open-source "bad crypto" can ... raise users' level of awareness about security.

You know that study that showed that people wearing seatbelts drive more recklessly, effectively exactly compensating for the increase in safety provided by the seatbelts?

I have a feeling people who think they're using a secure cryptosystem will speak much more freely than those who don't--meaning that the net effect of convincing users to use "bad crypto" is giving the global passive adversary† more interesting morning reading.

† Do we have a name for this guy in cryptography placeholder terms yet? Nathan (the NSA agent), maybe?


> You know that study that showed that people wearing seatbelts drive more recklessly, effectively exactly compensating for the increase in safety provided by the seatbelts?

Err, the first part of that statement may be true, but I don't think anyone ever put forth a claim backed up by data that they canceled each other out. People may drive more recklessly with seatbelts, but the law undoubtedly saved a lot of lives. You may be thinking of motorcycle helmet laws, which seems more plausible given that high-speed motorcycle accidents are much more likely to be fatal regardless.

As for the "global passive adversary", I tend to side with the term "state actor" since the only entities with the power to collect data on that scale are governments as they can legally force every telecom company in their jurisdiction to install taps while keeping their existence classified.

You're probably right about the risk compensation here; with the caveat that anyone who rises above the level of a "common criminal" would simply not trust online communication at all. Islamist terror groups use a known-courier system for all planning and communication because they just assume all electronic communications are being monitored.


> I have a feeling people who think they're using a secure cryptosystem will speak much more freely than those who don't--meaning that the net effect of convincing users to use "bad crypto" is giving the global passive adversary† more interesting morning reading.

I'd rather have that than the society where everyone silently accepts global surveillance as a norm and treats any kind of self-expression that gets you into trouble as pure stupidity on the part of the speaker (e.g. blames the victim).

What we have right now is a vicious cycle. "Normal" people have no cryptographic capabilities. So they simply adapt their beliefs and behavior to this reality. This makes them think of themselves as different from people who do have cryptographic capabilities. This means they develop "us and them" mentality and can no longer empathize with anyone seeking any level of digital privacy. This means there is no popular support for crypto. So "normal" people have no cryptographic capabilities.

I believe that "consumer cryptography", even if it's weak, would break this cycle.


Any crypto is about increasing the cost to attackers. There's no perfect crypto, and even if there were we would still have the $5 wrench attack. Security enhancing crypto is anything which is more expensive for the attacker than for the implementer, and I think encrypting with an unauthenticated key is firmly in that category.


When I ask people for their e-mail address, they give it to me.

When I ask them to verify their pgp key, it's less easy.

How could the verification of the key be built into the address they give me? Something DNSSEC based I guess.


Yes. DNSSEC. Because what we need is a trusted arbiter, and who better to fulfill that role than the governments of the US, or whatever country happens to own the TLD I use?


Well, it's at least better than X.500. With DNSSEC you only have to trust one government, not all of them.

And you can also get more than one domain, in more than one TLD. Not very practical for automatic verification, but for surviving a manual verification several governments would have to collude against you.

I still think that there must be something better. But I'm probably not good enough to create it.


Governments, always trustworthy, especially for their own citizens.


Why is it not easy to verify a key (fingerprint)? Put it on a business card with the email address or read it over the phone?

Also, DNSSEC isn't much more secure than our current CA system.


Now your collection of business cards is susceptible to tampering (no cryptographic authentication!).

Do you never leave your collected business cards unattended at a conference or trade fair? Possible, if you put them into your shirt pocket.

Do you store them in a vault long-term? Probably not.

Is it impossible to impersonate you, either with a human sound-alike or by voice generation software?

If you want perfect security against everyone, it quickly spirals out of control. You should probably remove the wallpapers in your house regularly and inspect what's underneath. ;-)

I'm not entirely serious here, but I'm surprised at the optimism about what the individual can possibly achieve.


I was implying it was a person-to-person handing over of a business card, and that if it wasn't, verification could be handled via the phone (using a number obtained from something other than the business card). But, yes, I didn't explain that as well as I could have.


"Now your collection of business cards is susceptible to tampering (no cryptographic authentication!)."

You are missing the part where it was suggested that the recipient of the business card telephones you and asks to verify the fingerprint.


In response to Tomte's criticism, this all boils down to the certification level (http://tanguy.ortolo.eu/blog/article9/pgp-signature-infos):

1) A fingerprint on a possibly compromised business card == 0

2) A fingerprint verified by phoning someone == 1

etc. And associated with that, independently, is of course the level of trust.

Sorry Tomte for not replying immediately to your message, but I've posted too much on this apparently.


You are missing both the "or" in his sentence (i.e. he describes alternatives, not cumulative measures) and my retort to the verification by phone.


Either you need a trusted third party or you need to pass something that looks like (at best): 4UpbRAXYMgrESrAwiLPYymNNni1hwyL2JEK7zz2SN52t

You could do that by printing it on a business card or reading it over the phone, and then the other guy is going to have to type it in somewhere.

The reason trusted third party keeps on coming up, despite all the myriad fundamental problems, is exactly because slinging that around is so unattractive.
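For reference, the string both sides end up comparing is just a hash of the key material. A minimal sketch of the OpenPGP V4 fingerprint computation (per RFC 4880, section 12.2; the packet body here is a placeholder):

  import hashlib

  def v4_fingerprint(pubkey_packet_body: bytes) -> str:
      # RFC 4880 12.2: SHA-1 over 0x99, a two-octet big-endian length,
      # and the body of the public key packet.
      prefix = b'\x99' + len(pubkey_packet_body).to_bytes(2, 'big')
      return hashlib.sha1(prefix + pubkey_packet_body).hexdigest().upper()

  print(v4_fingerprint(b'placeholder packet body'))  # 40 hex digits to compare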


There IS a nicer way to present fingerprints that is much more human-readable: map every few bytes to a dictionary word. There is an RFC for that:

http://www.ietf.org/rfc/rfc1751.txt


I've seen business cards with PGP fingerprints encoded as QR codes. That's a pretty neat idea.


Except you never notice when someone switches the QR code, as Tomte says.


Which is a very good point indeed.


"Hi, my email is john@doe.com and I'm johndoe on keybase.io" could work.


You nailed it. I stopped reading and started laughing when iMessage was hailed as a successful platform addressing these issues. The only thing securing iMessage is the procedure for processing law enforcement requests.

PGP is a tool that requires expertise to operate effectively. That isn't good or bad, it just is.


I think if you read it that way you probably didn't click the link to the extensive criticism I've written about iMessage's key distribution. Saying something is better than plaintext is not the same as saying it's good enough to send secret documents.

http://blog.cryptographyengineering.com/2013/06/can-apple-re...


I'm not terribly familiar with your writing, but I disagree that plaintext is inferior to ineffective crypto. The problem with convenient things is that you are giving up control in exchange for expediency.

In my professional life, I've provided communications solutions to a variety of true VIPs in key executive and other roles. The advice that their counsel gave on most occasions was simple: exchange secrets orally, and in person.

That means don't use the courthouse wifi, don't conduct critical deliberations via email or public telephone networks. If you google around, you can see plenty of references to US state governors only communicating via Blackberry text or obscure phone lines, mostly to control short-term leaks and keep deliberations out of the public record.

The point of all of this is that keeping secrets is hard, and if you must exchange them electronically, you need a strong set of operational protocols to keep those secrets.


> In the absence of routine attacks targeting cryptography, it's easy to believe that systems that don't annoy their users with identity management are superior to those that do. They do indeed have an advantage in deployability! But they have no security advantage. We'll probably find out someday soon, as more disclosures hit the press, that they were a serious liability.

You're probably talking about the PKI here. However, after a year's worth of Snowden leaks (and perhaps other leakers too) there have been zero documents discussing routine or even occasional sabotage of the PKI.

You suggest that we'll "probably" find out "someday soon" that only PGP works and everything else sucks, but we already went through that acid test. PGP was such an epic failure Snowden and Greenwald failed to connect entirely, and there were no big reveals about certificate authorities.

That doesn't mean the CA system is infallible, just that attacking endpoint security is easier. But as Matt's GPG example shows, GPG endpoint security is just as pathetic. Heck I didn't realise that GPG couldn't safely import public keys by fingerprint. How the hell does software like that, which has been around so long, fail to do such a basic check? QUANTUM would have made mincemeat of anyone trying to communicate securely using mainstream PGP implementations, whereas most S/MIME implementations I know of wouldn't have been fooled so easily.

Hand-waving about how nothing other than PGP is trustworthy doesn't fly with me: there's too much real-world evidence from real-world adversaries that it sucks and other systems work better.


I strongly agree with Matt that a transparent key exchange is crucial to boost PGP adoption. In fact, a few weeks ago I proposed a mechanism to do precisely that, over automated email responses, see http://blog.zorinaq.com/?e=76 (the mechanism I describe also solves a bunch of problems regarding how to share other personal information).

I would love to hear input from HN readers...


I gotta back Matt here. While none of the three of us would endorse the iMessage key exchange model, the truth is that the team that implemented iMessage crypto have kept more communications safe from dragnet surveillance than everybody commenting on this HN article combined.

I personally think there is a good middle ground where identity management is invisible to most users and customizable by users with more challenging threat models. That is what we are aiming for.


Hold on. The iMessage key exchange model works in part because it has trust anchors; it's not a pure peer-peer system. You can see the tradeoff it makes by looking at any discussion of iMessage ever and noting how people discount its security because of those trust anchors.


> […] with a very few exceptions (that probably prove the rule)

Off topic and pedantic but exceptions do not prove rules.

A specific exception like “you are allowed to do X when Y is true” may be used as proof of an (unwritten) rule about X being forbidden.

I.e. here we use the exception (when Y is true we are allowed to do X) to prove that there is a general rule saying that X is forbidden.


That's also wrong. The expression uses an archaic transitive "prove," which means "to test." Thus, "the exception tests the rule," which makes more sense.


Do you have a source for that?

From http://en.wikipedia.org/wiki/Exception_that_proves_the_rule

"The exception [that] proves the rule" means that the presence of an exception applying to a specific case establishes ("proves") that a general rule exists. For example, a sign that says "parking prohibited on Sundays" (the exception) "proves" that parking is allowed on the other six days of the week (the rule). A more explicit phrasing might be "The exception that proves the existence of the rule."


Just an idea that might not be very practical, but what if there were some number of "master" public keys managed by trusted groups that could be used to verify other public keys, and they were posted in plain text on billboards across towns (maybe this could replace CAs?)... just like you can use the Debian keys to verify the Tails OS key.


If it became at all popular, some people would have to know the private key half (or be able to decrypt it for use, same thing)... and those people would be subject to bribery, coercion, and rubber-hose cryptanalysis. I would not want to be one of them; I have relatives whom I love.


Learning to drive a car is hard. You have to watch the road, coordinate hands and feet, anticipate other drivers' moves and so on. No one bats an eye about this, because "it's a skill you have to learn". If you don't play by the rules of the road, you'll end up killing someone, or getting killed.

But for some reason (maybe because it's generally less life-threatening), people seem to expect deeply complex subjects, like e-mail encryption and identity management, to be easy. "Yeah, if you can just give me a fancy, easy-to-use GUI with forward secrecy, that'd be great!" Sure, it'd be great. But it's not going to happen. And that's not because PGP is broken -- of course, it does have its weak points. It's because people are too lazy to bother to learn.

What's the old adage? You can have quick, cheap and reliable. Pick two? Same here. You can have secure, easy to use, and reliable. Pick two.


I can't drive. Not for lack of trying.

I seemingly can't develop the muscle memory for unintuitive (to me) concepts like "clockwise is right" and "counter-clockwise is left", nor can I get used to the way a gas pedal actuates non-linearly. These are just two examples from a long list of problems that I have with the controls.

Then there is the utterly confusing signage.

I just can't do any of it, not without sweating like a pig. And I definitely can't be doing all of it at the same time. That's just nuts.

Every time I pick up a PS3 controller I have to learn to use it again, which depending on my withdrawal period can take anywhere from a couple of minutes to like half an hour. The only reason I can touch-type is because I'm doing it every day.

Please don't make the assumption that others' experience of the man-made world around us is in any way similar to yours; that's just not true.

Oh, and I have had absolutely no problem figuring out PGP encryption usage.


In what way are you contradicting Tharkun? I can't figure it out.

"Please don't make the assumption that other's experience of the man-made world around us is in any way similar to yours, that's just not true." Where did s/he? I'm genuinely stumped.

Learning to drive a car IS inherently hard (as in complex), just as "e-mail encryption and identity management", and that is a fact. If you for some reason are more or less adept than the average person at either of these things, I don't see what difference that makes to the reality of the situation. Like driverdan said, if you simply can't do something, you'll have to find a workaround.


What I said was to augment what he said.

But here's a true, I swear, you can probably check that it is, story just for you:

I didn't know how the whole army thing works when my time came. Just wasn't ever interested. Didn't know my sergeant from my brigadier.

The army took me seeing that I'm fit, for certain values of fit. Put me through boot-camp. That's when I landed in military jail for the first time. I could take everything that was going on in there only with a dose of humour, but grinning 24/7 was apparently not acceptable behaviour. But that wasn't what got me in jail.

There was one thing I could not take, absolutely. Still can't. There wasn't a moment to myself, I couldn't ever get alone in there. I had to always be accounted for, from their point of view; but from mine I couldn't find a place or the time to take a short meditation. I don't know what I have, but I've been getting through it all my life with meditation, and once that wasn't available I was heavily depressed. I thought of suicide, I talked of suicide, and that's basically all I ever talked or thought about. While grinning at anything they had to say to me in return.

So I went home. Took my stuff and went out the gate.

Later came back and went to military jail for a sentence. Then for boot-camp number two, as I didn't finish one.

But later, when I did finish it on my second attempt, they didn't want me anywhere near a base anymore. They wanted me out of base for most of the time. The way to achieve this in the army is to make you a driver. This way you're driving around, not being in the base, problem solved.

If you read my previous comment, you know what the problem with that approach is. They didn't. So I explained, repeatedly. Any time they'd let me see an officer that was in charge of that kind of thing, I'd explain that I can't drive, won't ever be able to, and not even torture can "change my mind".

Either they have decided to test that last bit empirically, or just couldn't wrap their heads around the idea of someone not being able to do something that "any idiot could"; but long story short I've done 7 months of prison time in three separate terms over the span of 1.5 years before they saw me as unfit for service and let me go, and be as I am.

That is to say, you're not always in a position to find a workaround, if I may refer to your closing sentence.


If you can't figure out how to drive what do you do? You don't drive, you use an alternative. The same could be said for tech you can't learn.


The analogy to learning to drive is flawed, because we learn to drive with muscle memory, and our intuitions about the physical world serve us well when driving.

Neither of these obtains with cryptography. Mistakes are not obvious and you have to concentrate to get it right.


It is a matter of skills which people find useful.

Learning to fly a plane is much harder than learning to drive a car, and almost no-one learns how to fly a plane because it just isn't a useful skill for most people.

I did spend time learning all about PGP, and I wish I hadn't bothered, as the skill of learning PGP has zero value to me. On the other hand, learning to drive a car, which took longer, is much more useful.


I don't agree. I use GPGtools on OSX with the openpgp smartcard and it works flawlessly and is truly convenient. Furthermore I can use 4096 bit RSA keys.

One thing I have learned watching the crypto forums over the years is that there are well calculated misinformation campaigns trying to dissuade people from using secure methods. I see it again and again and the people on this forum need to think carefully before swallowing this as sincere.

I would never never never trust a solution from Google or any large American corporation. They have just been caught lying about prism (Google) and taking bribes (RSA). These companies are now and always will be totally untrustworthy.



You talk bad about RSA and use RSA keys at the same time?



Touché


Why isn't RFC 1751

http://www.ietf.org/rfc/rfc1751.txt

used to make fingerprints readable? Verifying them would be much more convenient than it is now.

"For example, the 128-bit key of:

         CCAC 2AED 5910 56BE 4F90 FD44 1C53 4766
would become

         RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE
Likewise, a user should be able to type in

         TROD MUTE TAIL WARM CHAR KONG HAAG CITY BORE O TEAL AWL
as a key, and the machine should make the translation to:

         EFF8 1F9B FBC6 5350 920C DD74 16DE 8009"
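A simplified sketch of the encoding side (Python; the real RFC 1751 uses its own fixed 2048-word dictionary and two parity bits, so a stand-in word list is used here):

  WORDS = [f"W{i:04d}" for i in range(2048)]  # stand-in for the RFC's fixed list

  def key_to_words(key: bytes) -> list:
      assert len(key) == 8                     # RFC 1751 works on 64-bit blocks
      bits = int.from_bytes(key, 'big') << 2   # the real RFC appends 2 parity bits
      return [WORDS[(bits >> (11 * i)) & 0x7FF] for i in range(5, -1, -1)]

  # The 128-bit key quoted above is just two 64-bit blocks, i.e. 12 words.
  print(key_to_words(bytes.fromhex('CCAC2AED591056BE')))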


I haven't used the "original" PGP program for a very long time, but IIRC it had the option to use RFC 1751 or a similar scheme. A quick web search finds no options to use this scheme in GnuPG. Strange!


You're probably thinking of: https://en.wikipedia.org/wiki/PGP_word_list

[edit: See also this thread: http://lists.gnupg.org/pipermail/gnupg-devel/2001-March/0170...

I think they're missing (what I think was) the point of the list -- I seem to recall this was meant for use over the phone, and the words were selected by machine learning to all sound different. I've always thought this was a massive case of over-engineering -- and also somewhat narrow-sighted. I mean the list starts out with "Aardvark" of all things! ]


Yes, that's the list I was thinking of. The linked discussion was interesting. Good find! Perhaps the GnuPG devs are right that hex finger prints are more internationally viable than English words.


I've been thinking a lot about using words to stand in for numbers (eg: fingerprints) -- and I think the idea is good. But i18n should be considered -- and the PGP word list is probably the worst example I know of. The only thing it has going for it is that it's already collected/created -- but seeing as how it's not implemented by gpg -- that is kind of moot.


I can see why it would be hard to find one set of words that are easy to pronounce/understand in every language and every culture. But perhaps that isn’t necessary? I’m thinking that this feature needn’t be available in every localized version of GnuPG. Or that there could be localized word lists for various languages.


Maybe. I think sticking to something like the phonetic alphabet or "simple English" would be ok.


In my opinion, mail crypto needs to become mainstream usable. E.g. even trivial contents should be encrypted by default and this should be usable by default. Currently, S/MIME does a better job than PGP.

While the CA model seems to be broken in most X.509 use cases, like TLS/SSL, where a duplicate certificate can be used to mount a man-in-the-middle attack, this does not really affect S/MIME, especially after both parties have started a conversation. People that need to communicate really securely should therefore be able to ignore all CA trust and whitelist certificates on a per-user basis (e.g. like PGP).

Ordinary communication can still fall back to the existing CA model by default, to keep it usable (but not secure).

Some steps:

1. We need more love from the MUA vendors, who mostly support S/MIME, but it's still a PITA to use. Google, e.g., still does not support S/MIME on Android, see https://code.google.com/p/android/issues/detail?id=34374

2. We need CAs that are usable. StartSSL is nice and free, but it's not easy to use. Lower the entry barrier for getting and renewing/recreating certificates.

3. (most important) Make it easy to manage local CA trust. On each new system, the user should be able to select a "trust no CA/whitelist only" approach (see the sketch below) and then be responsible for trusting other parties. No vendor (Microsoft, Apple, Google, Mozilla) should silently distribute and trust new CAs without users' consent.
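In code terms, the whitelist-only idea from step 3 is roughly this (Python's standard ssl module; shown for TLS since that's where the API is, but the trust-store idea is the same for S/MIME, and the pem path is hypothetical):

  import ssl

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
  ctx.verify_mode = ssl.CERT_REQUIRED
  ctx.check_hostname = True
  # Trust nothing from the vendor-shipped roots; only my own whitelist.
  ctx.load_verify_locations(cafile="my-whitelist.pem")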


I don't understand why we don't just apply the OTR model to email.

OTR's big latency cost is the initial handshake. After that, you can persist the session. But email is intrinsically a high-latency medium anyway! We can afford a day or two of delay to set up an initial encrypted connection. In fact, we can display a big "not encrypted!" message to users, while still letting them exchange email, until we've done the handshake and socialist millionaire protocol (or verified keys by some other means) setup.

I am willing to bet 70-80% of people who send email to each other have their email clients online at the time they do it, even if they take a lot longer to answer -- especially with the number of smartphones out there. So we can set up an OTR session after one message the vast majority of the time, and then reuse the same session as much as possible. A toy model of the flow is sketched below.
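Just to show the state machine being proposed (everything here is hypothetical, not a real protocol):

  class MailSession:
      def __init__(self):
          self.peer_key = None                 # filled in by the first round trip

      def outgoing(self, body):
          if self.peer_key is None:
              # No session yet: send in the clear, flag it, attach our key offer.
              return {"encrypted": False, "body": body,
                      "banner": "not encrypted!", "key_offer": "my-public-key"}
          return {"encrypted": True, "body": "<ciphertext of %r>" % body}

      def incoming(self, msg):
          if "key_offer" in msg:               # handshake completes on first reply
              self.peer_key = msg["key_offer"]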


The "global CA" model is bust. How it was ever considered usable is beyond me, but we now have more than a decade of experience seeing just how bad it is. It is utterly, fundamentally broken and easily subverted by state actors.

For now, the only reasonably usable secure key exchange method seems to be what WhisperSystems are doing on their phone app (safe against MITM if the parties know each other, and very hard to MITM even if not - especially not automatically).


What's wrong with blockchain based solutions like namecoin?


If I understand correctly, namecoin is a distributed DNS replacement. Is there a way it addresses impersonation (e.g. MITM)? If so, can you please point me at documentation?

DNS does not address it, and even DNSSEC does not (if you can forge the certificate, and you can mitm the traffic - which state actors are all capable of - then it doesn't matter that you can't forge the DNS response itself).


You can place your own self-signed public key in your namecoin record. There is no longer any need for certificate authorities which can be coerced into forging certificates.


Well, if this is properly supported by software using namecoin for DNS resolution, then yes, this may work. The proof of the pudding, however, will arrive once it's eaten. I am not familiar enough with namecoin to point out where the potential problems are, but do note that the failure of CAs is not in the cryptography but rather in the trust model. In modern cryptography, the problems are almost always with the practice, not with the theory.


> software using namecoin for DNS resolution

Actually, it should be the other way around: dnschain [0] bridges DNS resolution and namecoin, so there's no need to modify existing software.

[0] https://github.com/okTurtles/dnschain


Cool! I wasn't aware of it.


Comodo give away free S/MIME certs:

http://www.comodo.com/home/email-security/free-email-certifi...

They can be generated and installed into the OS keystore by your browser automatically. By the low standards of crypto it works pretty well. Any old email client supports it out of the box.


Or, you can avoid the obvious problem in using a certificate generated by a non-trustable actor and use one which relies on the WoT instead: http://cacert.org


I find WoT based solutions to be much less trustable overall.


We already have a start there. Make all NON-EV certs free.


I'm all for that, but realistically how will you verify identity? If there is no identity verification what is stopping me from going out and registering for "google.com" and then using it in my MITM attack?


I think most non-EV SSL certificates these days are "verified" by sending a message to whatever e-mail address you have on file in your whois record.

Also on Chrome at least certificate pinning should prevent that particular scenario.
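The basic pinning check itself is tiny. A sketch using only the Python standard library (real deployments usually pin the public key rather than the whole certificate, and distribute the pin out of band):

  import hashlib, ssl

  def pin_matches(host, expected_sha256_hex):
      pem = ssl.get_server_certificate((host, 443))  # fetch the leaf cert
      der = ssl.PEM_cert_to_DER_cert(pem)
      return hashlib.sha256(der).hexdigest() == expected_sha256_hex.lower()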


Chrome's certificate pinning database doesn't scale at all (i.e. it works on less than 0.01% of the internet).

As to the whois thing, what is stopping me from hijacking a domain, changing the whois and then generating keys? The webadmin might never even know. You don't even need access to their email.

Or to put it more realistically: What is stopping the NSA from pressuring a domain registrar into altering the whois for a brief period in order to generate MITM keys?


Nothing, really. But that is the situation today.


> If the NSA is your adversary just forget about PGP.

Why? Last I heard, breaking PGP was equivalent to being able to factor large integers into a product of prime numbers. So, NSA is able to do that, and no one else can, no one in the public heard about it, no university research mathematician published about it, NSA has mathematicians who figured out how to do that but their major profs back in grad school don't know how, no one got a Fields Medal for it, etc.? I don't believe that.

What's going on here?

He means I need a Faraday cage? Okay, tell the NSA I have one; put it in place this afternoon.

He means the NSA has trained cockroaches that can wiggle into my hard drives while I sleep and steal all my data? If so, then fine. I'll spray bug killer.

Otherwise, why should I believe that the NSA could crack my PGP encrypted e-mail?
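(For anyone wondering what the factoring claim means in practice, here is the direction that matters: whoever can factor n can recompute the private exponent. A toy version, with tiny primes for illustration only; Python 3.8+ for the modular inverse:)

  p, q = 61, 53              # the secret primes (toy-sized)
  n, e = p * q, 17           # the public key
  phi = (p - 1) * (q - 1)    # computable only if you can factor n
  d = pow(e, -1, phi)        # the private exponent falls right out
  c = pow(42, e, n)          # encrypt the message 42
  assert pow(c, d, n) == 42  # ...and decrypt it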


If the NSA can't attack the crypto (not saying they can, but maybe), they'll attack the endpoint. Systems like QUANTUMINSERT allow them to selectively MITM your plaintext HTTP connections, directing your browser to load some asset that exploits a browser vulnerability, and using that to install persistent malware.


Yes, usability is the problem. But none of these proposed solutions manage to actually solve the usability problem without throwing out the security.

We really do need to let users manage trust, because trust is a rich concept. And humans are actually really good at trust, because we've been thriving and competing with each other in complex social situations for a long time.

The trick is finding ways to recruit people's evolved trust behaviors into an electronic context. That is, can we build meaningful webs of trust through repeated social interactions, just like in real life?

So it's not the mail client vendors who are best positioned to solve the problem, it's the social networks.

(Whether they want to solve the problem is a separate question.)


I'm using TextSecure on my Android phone as a Messaging replacement and it is great. However it appears to me that the service is not decentralised in any way. Is that assumption correct?

I like the email model such that anyone can install and run an email server. I'd actively push friends, family and colleagues to use a decentralised email replacement that was as easy to use and secure as TextSecure.


From what I understand, there's some federation baked into the protocol and it works with Cyanogenmod (they run a server for their users), but it's not really documented anywhere in detail.


I don't trust TextSecure. It's too transparent. It is entirely unclear what happens if it can't send an encrypted message. It's unclear where and how much I'll be billed (important to those of us outside the US). And sans user authentication, there's no real trust model there.


Feel free not to trust it, but it does indicate whether the message to be sent will be encrypted: in the current version, a padlock on the send button will be locked or unlocked. If you receive an unencrypted message from another TextSecure user, it automatically detects that the other party is using TextSecure and offers to initiate a key exchange.

The billing criticism is fair and warranted; currently, if you're sending over SMS, the first message can only contain 60 chars due to protocol overhead, so you often end up with short messages costing multiple SMSes.

There is a way to verify keys (manually!) but no indication that you have verified them.


The user needs to control the encryption, not Google or Yahoo. Surely Google is not proposing a system that prevents them from reading your email and serving you ads? Until we have something that actually prevents Google and Yahoo from getting the plaintext, none of the other problems matter that much.

The NSA isn't my concern, Google etc. are. I don't want to bother going to the lengths necessary to secure myself from the NSA since that just isn't practical. But it would be nice if google and its employees didn't have access to the plaintext of my email. If I send an email to anyone using gmail and they decrypt it in a way that lets google see my text when they reply, all of my own security steps are worthless.


Just a random thought - maybe there is a way to drive home the point that "you cannot have security if you're lazy"? Society expects people to get driving licenses before getting behind the wheel. Why not expect people to put in some amount of effort before they can get a mortgage or interact with a court, etc.? Sure, many people will screw this up, but maybe this will be enough to secure the majority.

</dream>

(confession: I myself am too lazy to use PGP)


I started this project https://github.com/abemassry/wsend-gpg and while it's not the easiest to use I'm sure it can be improved. There's no key exchange either but there has to be a quick and easy solution that people can use if we work on it a little more.


Counter example: I'm not too lazy to use HTTPS.

Maybe if email encryption was more like HTTPS more people would use it? Just transparent and easy.


> Maybe if email encryption was more like HTTPS more people would use it? Just transparent and easy.

Sure, but the point of a lot of comments here seems to be that It Can't Be Done.


It certainly can be done.

It's just that lots of people think HTTPS security is not good enough. (And you can include me on that set.)


PGP is about identity and privacy. We are not going to get that from email. Email isn't worth fixing. It's time to move on.

In the last few years we have seen IM and SMS merge into an almost seamless experience. Surely we could engineer a UI that also copes with larger bodies of text at the same time?

We need clients or servers that are multi-protocol. That way we can experiment with new ways of communicating.


Good article. However if your adversary is a three or four letter agency then by all accounts it seems that PGP/GPG still does work. Snowden and Greenwald used it, apparently successfully after some tuition.

The article also doesn't mention Bitmessage, which addresses a lot of the concerns. Bitmessage isn't forward secret though.


Here is a good criticism of PGP from 1999 that explains why it isn't usable by ordinary folks - http://www.cs.berkeley.edu/~tygar/papers/Why_Johnny_Cant_Enc...


Here's a handy guide which addresses a couple of these problems:

https://help.riseup.net/en/security/message-security/openpgp...


Not mainstream ≠ suck.

Also, about “terrible mail client implementations”: the problem is, to not be terrible for many people it would have to be built into GMail (and work transparently there). The consequences of that are obvious, I hope. So no, thanks.


This could perhaps be made easier to use if you had a UI like this: your phone pops up a message saying: "Hey, I notice you seem to be in the same room with Bob! We can increase the security of Bob's messages to you by exchanging a fingerprint. Do this now? (Yes / No / Whoa, Bob isn't here!)"

If you click yes, you then exchange fingerprints using e.g. QR codes, and the authenticity of messages from Bob is retrospectively checked.

Problem is, it's not obvious this can be done without compromising privacy of location.


> Problem is, it's not obvious this can be done without compromising privacy of location.

That's a problem for Free Software running on local machines!


> Adding forward secrecy to asynchronous offline email is a much bigger challenge, but fundamentally it's at least possible to some degree.

Is it really fundamentally possible? The author asserts this without really backing it with anything. I can understand how OTR-like systems can work between a static pair of clients, but it is not entirely clear whether it is possible at all to extend such a scheme to scenarios where message delivery is async and I might be using a set of clients/devices for messaging.


Matt links to a paper on forward-secure public key encryption in the notes. While this shows it is possible in principle, the actual procedure is pretty awkward, and probably not usable in its current state.


PGP needs to get on board with elliptic curve crypto... significantly smaller keys would make it more distributable, which solves a few of the problems mentioned.


Most systems should switch from simple multiplicative group crypto to elliptic curve, but it's hard to make an argument that doing that would resolve any of the problems Matt is referring to.


It's in GnuPG 2.1, but that's been in beta forever. Also, at least my smart token can only do RSA. It's disappointing that it's taking this long, but it's not as if people are throwing money at the GnuPG team.


These are largely problems with email, not PGP -- which, by the way, is not just for email; in fact I almost never use it with email.

SMTP is not meant to be secure. If you insist on communicating through an insecure channel/protocol and making it secure as an afterthought, it's always going to be inconvenient or otherwise suck. I say PGP is pretty good at what it does, and it's nice in that it doesn't promise what it doesn't do.


> Now let's ignore the fact that you've just leaked your key request to an untrusted server via HTTP.

This is a public key, so secrecy isn't needed here. Also, he is providing the fingerprint in another location, so if there were a MITM attack, it would have to happen on both Twitter (HTTPS) and pgp.mit.edu.


I think Matthew Green's point was more that requesting a public key leaks your intent to communicate with someone—the metadata, if you will—to an untrusted third-party.

Of course, e-mail headers, including From and To, must necessarily transit as cleartext, even when e-mail bodies are protected by PGP. The keyserver should perhaps be the least of Matthew's concern.


So... gpg + mixmaster remailers + Tor for http?


Can forward secrecy even work for emails, where you don't have a bidirectional communication channel? (Maybe the answer is "You have to build that bidirectional communication channel", but that means such a system can't simply use mail; it has to use mail plus X.)


If we assume Alice and Bob use the keyserver network, and each has their own "master" key-pair that is mutually trusted, they can rotate public sub-keys quite frequently (you just need to search for any new keys before sending an email -- this is of course another, non-SMTP channel -- but who doesn't use keyservers?).


keybase.io and the mailvelope browser plugin both do fine work in making PGP simple to use.

It isn't about being NSA-proof, it's about having the volume of "enveloped"/PGP-encrypted emails be so high that it isn't possible to directly target everyone.


No perfect forward secrecy. If someone gets your PGP key, they get all of your messages (past / present / future) and you might not even know your key was compromised.


Of course you can use forward secrecy (PFS) and ephemeral keys with PGP/GPG/GnuPG. I've been doing that for ages. Any public/private key system can do it: simply generate new key pairs and send the new signed public key whenever necessary. I blogged about this several years ago, when someone claimed it couldn't be done. You can freely choose whether to rotate keys on every message, daily, or on whatever schedule suits you.
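
A rough sketch of that rotation, using the Python `cryptography` library rather than GnuPG itself (the long-term key certifies each fresh encryption key, PGP-subkey style; all names here are illustrative):

    # Sketch: roll a fresh encryption keypair and sign it with a
    # long-term key, then discard old ephemeral keys after use.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    longterm = Ed25519PrivateKey.generate()   # stays put, like a PGP master key

    def rotate():
        ephemeral = X25519PrivateKey.generate()
        pub = ephemeral.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        cert = longterm.sign(pub)             # recipients verify this first
        return ephemeral, pub, cert

    # Rotate per message, per day, whatever; deleting the expired
    # ephemeral private keys is what actually buys forward secrecy.
    ephemeral, pub, cert = rotate()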


So you're just going to keep 1000 private keys around to decrypt your old stuff?



I think https://lavaboom.com/en/ addresses most of the issues mentioned. Just because pushing for privacy (an abstract idea whose worth is difficult to measure, especially on the Internet) is hard doesn't mean we shouldn't do it. Encryption is one of the few things we can rely on, and we should be using it. PGP isn't a lost cause; we just need to make it easy to use -- this includes automating (to some degree) the key exchange. /I'm one of the founders of Lavaboom, happy to answer any questions/


I just hope that however Google and Yahoo implement PGP in their mail offerings, they do it in a way that cannot be intercepted by governments/bad guys.


but they are not allowed to tell you if it can be intercepted...


exactly, which is why they'd need to build it in a way that cannot be backdoored.

However, if the government comes to Yahoo/Google and says "give us a backdoor", how can we be sure it doesn't happen?


> even modern elliptic curve implementations still produce surprisingly large keys.

> Modern EC public keys are tiny.

Well, which is it?
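
For concrete numbers, a quick check with the Python `cryptography` library (my guess at the reconciliation: the raw keys really are tiny, and the "surprisingly large" complaint is more about full OpenPGP certificates with their signatures and metadata):

    # Sketch: raw Ed25519 public key vs. DER-encoded RSA-2048 public key.
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    ec_pub = Ed25519PrivateKey.generate().public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw)
    rsa_pub = rsa.generate_private_key(
        public_exponent=65537, key_size=2048).public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

    print(len(ec_pub), len(rsa_pub))  # 32 bytes vs. ~294 bytes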


I'm kind of sad the author didn't touch on key signing at all. The trust levels are basically meaningless. What does it mean to trust someone more than someone else? If doing a request to get someone's key exposes your social network, imagine what publicly signing someone's key does. Just some food for thought :)


Trust levels used during signing represent the quality of the identity check. In simple terms: if you checked the person's ID, that's "sig3"; if the guy just claims the name on the internet, it's "sig1".

"Owner trust", on the other hand, is a local concept that is not exported, and is used solely for trust-path verification.


> Except maybe not: if you happen to do this with GnuPG 2.0.18 -- one version off from the very latest GnuPG -- the client won't actually bother to check the fingerprint of the received key.

Even in its long form, it's relatively easy to generate different keys that have the same fingerprint.


I'm aware of simple brute-force attacks on short key IDs [0], which are just the last 32 bits of the fingerprint (e.g. 438CF0E2). With significant effort, one might be able to extend that to 64 bits.

I'd be much more surprised by a full fingerprint match. Wouldn't that imply a SHA-1 collision?

[0] http://www.asheesh.org/note/debian/short-key-ids-are-bad-new...
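
A toy illustration of how small 32 bits is: a birthday search over truncated SHA-1 of random strings (not real OpenPGP key material) collides after roughly 2^16 attempts:

    # Sketch: find two inputs whose SHA-1 digests share the last 32
    # bits -- the width of a short OpenPGP key ID. The birthday bound
    # makes this ~2^16 hashes, i.e. seconds on a laptop. (The attack
    # in [0] is targeted key generation, but the tiny space is the
    # same underlying problem.)
    import hashlib
    import os

    seen = {}
    tries = 0
    while True:
        tries += 1
        s = os.urandom(16)
        short_id = hashlib.sha1(s).hexdigest()[-8:]  # e.g. "438cf0e2"
        if short_id in seen and seen[short_id] != s:
            print("collision on %s after %d tries" % (short_id, tries))
            break
        seen[short_id] = s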


Yes I was referring to the 64bit long key ID. The full fingerprint is a SHA-1 and not vulnerable.

See https://www.debian-administration.org/users/dkg/weblog/105


The 64-bit has been done: I've seen it. 0000000000000001, I think?


using a bad hash function?


Yeah really, it's actually Pretty Good if you think about it.


Problem:

PGP is complicated (VERY complicated, to the average user), resulting in next to zero adoption.

Suggestion:

Simplify the goals in a way that can be upgraded at some later date.

I think we need a browser plugin (All browsers. Other non-browser tools too, ideally, but the browser is important) that lets you securely SIGN posts locally in a style more or less like GPG's --clearsign option. Ideally, this should literally be --clearsign for compatibility, with the plugin hiding the "---- BEGIN PGP SIGNED MESSAGE ----" headers/footers, though these details are less important.

The key should be automagically generated, and stored locally in a secure way. (Bonus points for letting you use the keyrings in ~/.gnupg/ as an advanced, optional feature.) The UI goal is to simply let people post things and click a "sign this" button next to a <textarea> or similar. Ideally, later on, this could become sign-by-default.

On the other side, the browser plugin should notice signed blocks of text and authenticate them. Pubkeys are saved locally (key pinning). What this provides is 1) verification that posts are actually by the same author, and 2) proof that someone is the same author cross-domain (or across different accounts/usernames).

No attempt is made to tie the key to some external identity (though this would be somewhat easy to prove). The idea is to remove the authentication problem (keyservers/PKI) entirely. This can be man-in-the-middled, but the MitM would have to be working 100% of the time or the change in key would be noticed.
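
A minimal sketch of that pinning logic, trust-on-first-use style (the storage format and names here are made up):

    # Sketch: pin the first key seen per author; flag any change.
    import hashlib
    import json
    import pathlib

    PIN_FILE = pathlib.Path("pins.json")  # hypothetical local store

    def _load():
        return json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}

    def check_author(author: str, pubkey_bytes: bytes) -> str:
        pins = _load()
        fp = hashlib.sha1(pubkey_bytes).hexdigest()
        if author not in pins:
            pins[author] = fp                 # first sighting: pin it
            PIN_FILE.write_text(json.dumps(pins))
            return "new author, key pinned"
        if pins[author] == fp:
            return "sig ok!"                  # same key as every prior post
        return "WARNING: key changed!"        # possible MitM or new device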

No attempt is made regarding encryption (hiding the message). This should also greatly simplify the interface.

The goal here is to get people using proper (LOCAL STORE ONLY) public/private keys. The UI should be little more than a [sign this] button that handles everything, and a <sig ok!> icon on the reading side. It should be possible to get the average user to understand and use such a tool.

Later, when the idea of signing your posts has become more widespread and many people have a valid public/private key pair already in use, other features can be added back in. As those "2nd generation" tools have a large pool of keys to draw from, it should be easier to start some variant of Web Of Trust. Even if that never happens, getting signing widespread is useful on its own.

I realize this doesn't protect against a large number of well-known attacks, and only offers mild protection against MitM. This is intentional, as the goal is getting people to actually use some minimal subset of PGP/GPG-like tools, possibly as an educational exercise. The rest of the stuff can be addressed later.


I've designed an API for this and started working on it since getting a Yubikey Neo a few weeks ago, if you or anyone else is interested. I'm focusing on UX specifically.

My design has a REST endpoint that runs locally, and a JS client/SDK then connects to it on a known endpoint. My "MVP" version is to have the user running an interface with the request queue in a separate tab, and the server always at a fixed localhost:port endpoint. The client page would then issue a REST request that hangs until the user responds to the request in the separate tab.

This would conceivably work as a browser plugin as well, since I'm planning to write it in JS for Node -- basically the server logic would instead live in the plugin, and the SDK would check for the presence of the plugin before making a REST call. IMO the advantage of making it a REST endpoint, though, is protecting private keys from whatever else might be going on in the browser process(es) -- based on my own worst-case assumptions; I'm unsure whether it's actually an issue with the plugin architectures.
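
The hanging-request pattern might look something like this (sketched in Python rather than Node, with hypothetical /sign and /approve endpoints; real GPG/Yubikey signing omitted):

    # Sketch: a local signing daemon. A client POSTs text to /sign and
    # the request hangs until the user approves it from another tab
    # via /approve. Placeholder signature only.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    PENDING = {}           # request id -> {"text", "event", "sig"}
    LOCK = threading.Lock()
    NEXT_ID = [0]

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"] or 0))
            if self.path == "/sign":
                with LOCK:
                    NEXT_ID[0] += 1
                    rid = NEXT_ID[0]
                    req = {"text": body.decode(), "event": threading.Event(), "sig": None}
                    PENDING[rid] = req
                req["event"].wait(timeout=300)   # hang until approved
                self._reply({"id": rid, "signature": req["sig"]})
            elif self.path == "/approve":
                rid = int(json.loads(body)["id"])
                with LOCK:
                    req = PENDING.pop(rid)
                req["sig"] = "FAKE-SIGNATURE"    # call gpg / the token here
                req["event"].set()
                self._reply({"ok": True})

        def _reply(self, obj):
            data = json.dumps(obj).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    ThreadingHTTPServer(("127.0.0.1", 7777), Handler).serve_forever()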

I'm an aspiring interaction/UX designer so that is the aspect I am focusing on, and my motivation is that I am personally starting to use GPG with a Yubikey + offline master. So yea, collaborators hit me up, especially if you're in SF.


This article fails my smell test. The adolescent vocabulary doesn't correlate with the otherwise polished writing style and the technical merits fall far short of the proposed remediations. It is therefore likely to have been funded or otherwise inspired by the NSA in an attempt to smear PGP, still the most effective cryptography available to the average person.



