Matt is wrong about this. He's being victimized by a pernicious fallacy.
It certainly appears that the most "successful" cryptosystems have transparent keying. But that's belied by the fact that, with a very few exceptions (that probably prove the rule), cryptosystems aren't directly attacked by most adversaries... except the global adversary.
In the absence of routine attacks targeting cryptography, it's easy to believe that systems that don't annoy their users with identity management are superior to those that do. They do indeed have an advantage in deployability! But they have no security advantage. We'll probably find out someday soon, as more disclosures hit the press, that they were a serious liability.
There is a lot wrong with PGP! It is reasonable to want it to die. But PGP is the only trustworthy mainstream cryptosystem we have; I mean, literally, I think it might be the only one.
Moreover, I would argue that a 'translucent' key management infrastructure /can/ be better in all ways than PGP. For example, imagine that Google provided a transparent key distribution service for all its users, but also allowed you to verify key fingerprints manually before sending messages. Congratulations -- for users who care, you've got something that works every bit as well as PGP. Everyone else isn't sending plaintext! Sure an attacker can compromise them, but it requires an expensive MITM attack. They have to be targets a priori, not after the fact. I'm struggling to see how anyone is worse off here, except through the nebulous reasoning that 'making things easy' makes people careless. Making things hard definitely makes people careless -- I've seen this firsthand.
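To make the "verify fingerprints manually" escape hatch concrete, here's a minimal sketch. The function names and the SHA-256 rendering are my own assumptions, not any real service's API:

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Render a key fingerprint as grouped hex for out-of-band comparison."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

def verify_out_of_band(pubkey_bytes: bytes, spoken_fingerprint: str) -> bool:
    """Compare a directory-served key against a fingerprint read aloud
    or printed on paper. Whitespace and case are ignored."""
    return fingerprint(pubkey_bytes) == spoken_fingerprint.strip().upper()
```

The point is that the directory does all the distribution work, and the fingerprint comparison is an optional, human-mediated check layered on top for the users who care.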
But more to the point, even paranoid users have a lot of options that are better than PGP. Using ZRTP to establish secure channels is a very safe way to do things, assuming your attacker can't really forge voiceprints (and this seems hard, even for the NSA). From that point you can push strong public keys out to a dedicated text/email app. That we don't do this is not so much because it's a bad idea -- it's because so far people haven't tried it.
Deployability is worth more than might be immediately obvious. The question of deployment is directly related to the business of availability. If 99% of people used a cryptosystem that provided no real security beyond thwarting a global adversary, but which also had the option of further measures to confirm identity when necessary, we would be in a much better position than we are now, because it would mean that when I do have contacts with whom I would prefer to use encryption, the software will already be on their systems and I only need to instruct them in how to use it.
There's a fallacy in saying that no measures are better than imperfect measures 'because then you don't have a false sense of security'. Regardless of security measures or not the false sense of security is baked into how people use computers.
I challenge you to go explain to anyone you know who does not identify as a 'computer person' how email routing works and why it's quite possible for a 3rd party to read their mail. If they even understand what you're talking about, the response is an almost universally cool 'Oh, but so many other people do it, and what am I supposed to do, stop using email?'
The vast majority of people will not stop talking just because in the abstract they might be overheard.
EDIT: As a note, I think any centralized system of key distribution is fundamentally insecure.
The way I did this in a past life:
- Explain that email is a postcard.
- Give examples of information that should not belong in email.
- Demonstrate the issues people create for themselves by breaking the rules.
- Provide tools to accomplish business needs (file transfer in particular).
- Tell them that their email is subject to audit.
- Implement technical controls to warn/enforce suspect behavior.
People want to do the right thing, but you must set clear expectations so they know what to do! The most secure email is one that is never sent.
You can however get them to consider what they put in there by explaining "email is like a postcard; everyone who touches it can read it".
I usually use the "It's like mailing a postcard" comparison.
There are precious few individuals whose management of their private key I trust at that level. I could not even trust my wife to manage a hardware key I gave her; it would fall apart immediately: "I can't use this key on my Chromebook? I can't use this key on my Galaxy? I can't use this key on my iPad? Give me a soft key that I can use, or a cloud service..."
Therefore, PGP is not mainstream. There is a large population of people doing it incorrectly, and they must because they have no other real choice.
* PGP is only trustworthy if both parties treat key management with the utmost severity.
* Transparent key management systems that rely entirely on heuristics and click-through warnings are trustworthy.
So there's just a dozen or so central authorities who need to handle keys with the utmost severity.
Also, the things that break transparently-keyed systems do so repeatedly. That's what transparent keying means: it's mediated by machines, with the slow, clumsy human interactions factored out. CT? Audit logs? It's like Lucy and Charlie Brown with the football, except Lucy is hooked up to a for() loop.
So, you found out after the fact that the global adversary injected themselves into the middle of your conversation with a source. What do you do now?
Each chart would be one page. It would walk a user through one step of using PGP/GPG. It would include links to in-depth reading.
The final sheet would be walking through common mistakes that users make.
This is one of the things I'd pay for if I had FU money. I'm sort of tempted to kickstart the idea.
The only outcome of such a chart would be reams of people utterly vulnerable and thinking that they are not.
If you constrain the environment, e.g. "How to use SSL certificates in Chrome on Mac OS X Lion" then there might be a chance that could fit on one page in an easy to understand format.
I have a master key that was created offline, and use a subkey on a USB smart token. It works well, but it was a bitch to set up. Apparently Qubes OS has a hardware-virtualized PGP container for protecting private keys, but that's not a viable solution right now.
The OP's complaints boil down to two IMO: 1) poor UX; 2) trust is difficult to manage.
1) Seems solvable to me again with subkeys for encryption and signing (which I believe are created by default for GPG2.0 anyway).
2) Asking technology to solve a problem which only each individual can answer ("do I trust this person to evaluate other people's identities as carefully as I do?") is not doable.
The example of someone getting a journalist's pubkey and being stung by the (fixed) bug whereby the wrong key may be imported is again letting the user off the hook. If someone is not actually using GnuPG's abilities to examine the WoT and see who has signed the journalist's key etc then it's an example of magical thinking with "encryption" replacing any other nostrum.
I don't believe that problem can be solved with technology.
All security decisions boil down to some calculus of risk, impact and cost. The most sensitive conversations that my wife and I typically have remotely aren't ones that justify the cost (both in terms of hassle and $) of carrying a secure device around.
Personally, if I were in a situation where I was remote and my physical safety or livelihood could be compromised from an email, my wife and I would probably suck it up and run around with secured netbooks or something. In my case, I don't see that risk/impact calculation adding up to requiring PGP.
Yes, transparent key systems would likely be less secure than PGP. But if the usability were significantly better and people used them, that is better than the alternative of using nothing. For many of these solutions, there is a window of vulnerability surrounding the key exchange that closes if you aren't snooping traffic at that moment, so it's not like they're completely insecure options; it's just that they have attack vectors that may be considered acceptable risks in many situations.
I don't think bad crypto makes the global adversary go "aw, shit, we better target someone else". I think it makes them go "excellent, something else we can get a secret appropriation to go break".
You as an individual cannot fight a state actor. It doesn't matter how secure your crypto is; they can hold a gun to your head and force you to give up the key (or in more civilized countries, throw you in prison forever). If you become an individual target to a state actor, there is literally nothing you can do to stop them unless another state actor is willing to protect you: they have the resources of an entire economy behind them and there's no security solution you can cobble together that will be able to keep them out.
Even Snowden, who practices a paranoid level of OpSec, just assumes his electronic communications are being read. The only reason the CIA hasn't done an extrajudicial rendition on him is that he is living under the protection of another state actor (Russia).
And what really bothers me is that this will give people a false sense of security. At least right now I'm seeing regular folks refraining from exposing sensitive info online out of fear of evil hackers that are often the subject of news. So yes, I think unencrypted email is better than a solution that isn't secure.
Of course for that use-case you don't really need end-to-end encryption: an encrypted connection to the IM server would be fine, and maybe actually better. But a bunch of services don't support that (though Google Talk does).
I trust my government (within reason) at the moment but I'm not comfortable betting that they will never ever turn anti-gay and start coming after me.
That said, crypto is useful in avoiding their gaze in the first place. For this, vulnerable crypto is better than nothing: assuming the vulnerable crypto requires a non-trivial and non-repeatable process to break, it's unlikely that even a state actor is going to bother breaking it for the entire population.
I'm not entirely sure that I agree. I've long thought that using encryption above and beyond what the average person employs would be a great way to appear on 'their' radar. I don't have the need, so I'm happy not trying to find out. That said, if everyone had strong encryption enabled by default, no one would stand out, which I support.
Widely adopted open-source "bad crypto" can:
- Raise users' level of awareness about security.
- Create a market that can later be serviced by better crypto.
- Encourage creation of infrastructure that can be later used in better crypto.
- Encourage investigation of better UI.
Also, other posters are right about increasing the difficulty of executing an attack.
You know that study that showed that people wearing seatbelts drive more recklessly, effectively exactly compensating for the increase in safety provided by the seatbelts?
I have a feeling people who think they're using a secure cryptosystem will speak much more freely than those who don't--meaning that the net effect of convincing users to use "bad crypto" is giving the global passive adversary† more interesting morning reading.
† Do we have a name for this guy in cryptography placeholder terms yet? Nathan (the NSA agent), maybe?
Err, the first part of that statement may be true, but I don't think anyone ever put forth a claim backed up by data that they cancelled each other out. People may drive more recklessly with seatbelts, but the law undoubtedly saved a lot of lives. You may be thinking of motorcycle helmet laws, which seems more plausible given that high-speed motorcycle accidents are much more likely to be fatal regardless.
As for the "global passive adversary", I tend to side with the term "state actor" since the only entities with the power to collect data on that scale are governments as they can legally force every telecom company in their jurisdiction to install taps while keeping their existence classified.
You're probably right about the risk compensation here; with the caveat that anyone who rises above the level of a "common criminal" would simply not trust online communication at all. Islamist terror groups use a known-courier system for all planning and communication because they just assume all electronic communications are being monitored.
I'd rather have that than the society where everyone silently accepts global surveillance as a norm and treats any kind of self-expression that gets you into trouble as pure stupidity on the part of the speaker (e.g. blames the victim).
What we have right now is a vicious cycle. "Normal" people have no cryptographic capabilities. So they simply adapt their beliefs and behavior to this reality. This makes them think of themselves as different from people who do have cryptographic capabilities. This means they develop "us and them" mentality and can no longer empathize with anyone seeking any level of digital privacy. This means there is no popular support for crypto. So "normal" people have no cryptographic capabilities.
I believe that "consumer cryptography", even if it's weak, would break this cycle.
When I ask them to verify their PGP key, it's less easy.
How could the verification of the key be built into the address they give me? Something DNSSEC based I guess.
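One concrete DNSSEC-based scheme along these lines is RFC 7929's OPENPGPKEY DNS records, which put a key at a name derived from the email address. A sketch of just the record-name derivation (no actual DNS lookup here, and I'm glossing over RFC 7929's local-part canonicalization details):

```python
import hashlib

def openpgpkey_record_name(email: str) -> str:
    """Derive the DNS name that would hold an OPENPGPKEY record, per
    RFC 7929: SHA2-256 of the local-part, truncated to 28 octets,
    hex-encoded, under the _openpgpkey subdomain."""
    local, domain = email.rsplit("@", 1)
    digest = hashlib.sha256(local.encode("utf-8")).digest()[:28]
    return f"{digest.hex()}._openpgpkey.{domain}"
```

A resolver that validates DNSSEC signatures on that lookup gives you key distribution tied to the address itself, for whatever the DNSSEC trust chain is worth.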
And you can also get more than one domain, in more than one TLD. Not very practical for automatic verification, but for surviving a manual verification several governments would have to collude against you.
I still think that there must be something better. But I'm probably not good enough to create it.
Also, DNSSEC isn't much more secure than our current CA system.
Do you never leave your collected business cards unattended at a conference or trade fair? Possible, if you put them into your shirt pocket.
Do you store them in a vault long-term? Probably not.
Is it impossible to impersonate you, either with a human sound-alike or by voice generation software?
If you want perfect security against everyone, it quickly spirals out of control. You should probably remove the wallpapers in your house regularly and inspect what's underneath. ;-)
I'm not entirely serious here, but I'm surprised at the optimism about what the individual can possibly achieve.
You are missing the part where it was suggested that the recipient of the business card telephones you and asks to verify the fingerprint.
Sorry Tomte for not replying immediately to your message, but I've posted too much on this apparently.
You could do that by printing it on a business card or reading it over the phone, and then the other guy is going to have to type it in somewhere.
The reason trusted third party keeps on coming up, despite all the myriad fundamental problems, is exactly because slinging that around is so unattractive.
PGP is a tool that requires expertise to operate effectively. That isn't good or bad, it just is.
In my professional life, I've provided communications solutions to a variety of true VIPs in key executive and other roles. The advice that their counsel gave on most occasions was simple: exchange secrets orally, and in person.
That means don't use the courthouse wifi, and don't conduct critical deliberations via email or public telephone networks. If you google around, you can see plenty of references to US state governors only communicating via Blackberry text or obscure phone lines, mostly to control short-term leaks and keep deliberations out of the public record.
The point of all of this is that keeping secrets is hard, and if you must exchange them electronically, you need a strong set of operational protocols to keep those secrets.
You're probably talking about the PKI here. However, after a year's worth of Snowden leaks (and perhaps other leakers too) there have been zero documents discussing routine or even occasional sabotage of the PKI.
You suggest that we'll "probably" find out "someday soon" that only PGP works and everything else sucks, but we already went through that acid test. PGP was such an epic failure Snowden and Greenwald failed to connect entirely, and there were no big reveals about certificate authorities.
That doesn't mean the CA system is infallible, just that attacking endpoint security is easier. But as Matt's GPG example shows, GPG endpoint security is just as pathetic. Heck I didn't realise that GPG couldn't safely import public keys by fingerprint. How the hell does software like that, which has been around so long, fail to do such a basic check? QUANTUM would have made mincemeat of anyone trying to communicate securely using mainstream PGP implementations, whereas most S/MIME implementations I know of wouldn't have been fooled so easily.
Hand-waving about how nothing other than PGP is trustworthy doesn't fly with me: there's too much real-world evidence from real-world adversaries that it sucks and other systems work better.
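The fix for that import bug is conceptually tiny: never trust the keyserver's answer, always check the fetched key against the fingerprint you asked for. A sketch that parses `gpg --with-colons` key-listing output (the `fpr:` line format is GnuPG's; the helper names are mine):

```python
def fingerprints_in_colons_output(listing: str) -> list:
    """Extract fingerprints from `gpg --with-colons` output.
    The fingerprint is field 10 of each `fpr:` line."""
    return [line.split(":")[9] for line in listing.splitlines()
            if line.startswith("fpr:")]

def safe_to_import(listing: str, requested_fpr: str) -> bool:
    """Refuse a keyserver response unless the key we actually requested
    (by full fingerprint) is present in it."""
    want = requested_fpr.replace(" ", "").upper()
    return want in (f.upper() for f in fingerprints_in_colons_output(listing))
```

That is the whole check the commenter is complaining was missing: compare before you trust, not after.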
I would love to hear input from HN readers...
I personally think there is a good middle ground where identity management is invisible to most users and customizable by users with more challenging threat models. That is what we are aiming for.
Off topic and pedantic but exceptions do not prove rules.
A specific exception like “you are allowed to do X when Y is true” may be used as proof of an (unwritten) rule about X being forbidden.
I.e. here we use the exception (when Y is true we are allowed to do X) to prove that there is a general rule saying that X is forbidden.
"The exception [that] proves the rule" means that the presence of an exception applying to a specific case establishes ("proves") that a general rule exists. For example, a sign that says "parking prohibited on Sundays" (the exception) "proves" that parking is allowed on the other six days of the week (the rule). A more explicit phrasing might be "The exception that proves the existence of the rule."
But for some reason (maybe because it's generally less life-threatening), people seem to expect deeply complex subjects, like e-mail encryption and identity management, to be easy. "Yeah, if you can just give me a fancy, easy-to-use GUI with forward secrecy, that'd be great!" Sure, it'd be great. But it's not going to happen. And that's not because PGP is broken -- of course, it does have its weak points. It's because people are too lazy to bother to learn.
What's the old adage? You can have quick, cheap, and reliable; pick two? Same here. You can have secure, easy to use, and reliable. Pick two.
I seemingly can't develop the muscle memory for unintuitive (to me) concepts like "clockwise is right" and "counter-clockwise is left", nor can I get used to the way a gas pedal actuates non-linearly. These are just two examples of a long list of problems that I have with the controls.
Then there is the utterly confusing signage.
I just can't do any of it, not without sweating like a pig. And I definitely can't be doing all of it at the same time. That's just nuts.
Every time I pick up a PS3 controller I have to learn to use it again, which depending on my withdrawal period can take anywhere from a couple of minutes to like half an hour. The only reason I can touch-type is because I'm doing it every day.
Please don't make the assumption that others' experience of the man-made world around us is in any way similar to yours; that's just not true.
Oh, and I have had absolutely no problem figuring out PGP encryption usage.
"Please don't make the assumption that others' experience of the man-made world around us is in any way similar to yours; that's just not true." Where did s/he? I'm genuinely stumped.
Learning to drive a car IS inherently hard (as in complex), just as "e-mail encryption and identity management", and that is a fact. If you for some reason are more or less adept than the average person at either of these things, I don't see what difference that makes to the reality of the situation. Like driverdan said, if you simply can't do something, you'll have to find a workaround.
But here's a true, I swear, you can probably check that it is, story just for you:
I didn't know how the whole army thing works when my time came. Just wasn't ever interested. Didn't know my sergeant from my brigadier.
The army took me seeing that I'm fit, for certain values of fit. Put me through boot-camp. That's when I landed in military jail for the first time. I could take everything that was going on in there only with a dose of humour, but grinning 24/7 was apparently not acceptable behaviour. But that wasn't what got me in jail.
There was one thing I could not take, absolutely. Still can't. There wasn't a moment to myself, I couldn't ever get alone in there. I had to always be accounted for, from their point of view; but from mine I couldn't find a place or the time to take a short meditation. I don't know what I have, but I've been getting through it all my life with meditation, and once that wasn't available I was heavily depressed. I thought of suicide, I talked of suicide, and that's basically all I ever talked or thought about. While grinning at anything they had to say to me in return.
So I went home. Took my stuff and went out the gate.
Later came back and went to military jail for a sentence. Then for boot-camp number two, as I didn't finish one.
But later when I did finish it on my second attempt, they didn't want me anywhere near a base anymore. They wanted me out of the base for most of the time. The way to achieve this in the army is to make you a driver. This way you're driving around, not being in the base; problem solved.
If you read my previous comment, you know what the problem with that approach is. They didn't. So I explained, repeatedly. Any time they'd let me see an officer that was in charge of that kind of thing, I'd explain that I can't drive, won't ever be able to, and not even torture can "change my mind".
Either they have decided to test that last bit empirically, or just couldn't wrap their heads around the idea of someone not being able to do something that "any idiot could"; but long story short I've done 7 months of prison time in three separate terms over the span of 1.5 years before they saw me as unfit for service and let me go, and be as I am.
That is to say, you're not always in a position to find a workaround, if I may refer to your closing sentence.
Neither of these obtains with cryptography. Mistakes are not obvious and you have to concentrate to get it right.
Learning to fly a plane is much harder than learning to drive a car, and almost no-one learns how to fly a plane because it just isn't a useful skill for most people.
I did spend time learning all about PGP, and I wish I hadn't bothered, as the skill of learning PGP has zero value to me. On the other hand, learning to drive a car, which took longer, is much more useful.
One thing I have learned watching the crypto forums over the years is that there are well calculated misinformation campaigns trying to dissuade people from using secure methods. I see it again and again and the people on this forum need to think carefully before swallowing this as sincere.
I would never never never trust a solution from Google or any large American corporation. They have just been caught lying about prism (Google) and taking bribes (RSA). These companies are now and always will be totally untrustworthy.
Couldn't something like the PGP word list be used to provide fingerprints that are readable? Verifying would be much more convenient than now.
"For example, the 128-bit key of:
CCAC 2AED 5910 56BE 4F90 FD44 1C53 4766
RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE
TROD MUTE TAIL WARM CHAR KONG HAAG CITY BORE O TEAL AWL
EFF8 1F9B FBC6 5350 920C DD74 16DE 8009"
[edit: See also this thread: http://lists.gnupg.org/pipermail/gnupg-devel/2001-March/0170...
I think they're missing (what I think was) the point of the list -- I seem to recall this was meant for use over the phone, and the words were selected by machine learning to all sound different. I've always thought this was a massive case of over-engineering -- and also somewhat narrow-sighted. I mean the list starts out with "Aardvark" of all things! ]
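For the mechanics: the word list encodes one byte per word, alternating between two 256-word vocabularies so that a dropped, doubled, or swapped byte is audible over the phone. A sketch with placeholder vocabularies (the real lists are 256 two-syllable words for even positions and 256 three-syllable words for odd ones):

```python
# Placeholder vocabularies standing in for the real PGP word list.
EVEN = [f"even{i:02X}" for i in range(256)]
ODD = [f"odd{i:02X}" for i in range(256)]

def to_words(fingerprint_hex: str) -> list:
    """Encode a hex fingerprint one byte at a time, alternating word
    lists by byte position; a transmission error lands a word in the
    wrong vocabulary and is immediately noticeable."""
    raw = bytes.fromhex(fingerprint_hex.replace(" ", ""))
    return [(EVEN if i % 2 == 0 else ODD)[b] for i, b in enumerate(raw)]
```

With the real vocabularies, "CCAC 2AED ..." comes out as the "RASH BUSH MILK LOOK ..." sequence quoted above.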
While the CA model seems to be broken in most X.509 use cases, like TLS/SSL, where a duplicate certificate can be used to mount a man-in-the-middle attack, this does not really affect S/MIME, especially after both parties have started a "conversation". People who need to communicate "really" securely should therefore be able to ignore all "CA trust" and whitelist certificates on a per-user basis (e.g. like PGP).
Ordinary communication can still by default fall back to the existing CA model to keep it usable (but not secure).
1. We need more love from the MUA vendors, who mostly support S/MIME, but it's still a PITA to use. Google e.g. still does not support S/MIME on Android, see https://code.google.com/p/android/issues/detail?id=34374
2. We need CAs that are usable. StartSSL is nice and free, but it's not easy to use. Lower the entry barrier for getting and renewing/recreating certificates.
3. (most important) Make it easy to manage local CA-trust. On each new system, the user should be able to select a "trust no CA/whitelist only" approach and then be responsible for trusting other parties. No vendor (Microsoft, Apple, Google, Mozilla) should silently distribute and trust new CAs without users consent.
OTR's big latency is the initial handshake. After that, you can persist the session. But email is intrinsically a high latency medium anyway! We can afford 1 or 2 days delay to setup an initial encrypted connection. In fact, we can display a big "not encrypted!" message to users, while still letting them exchange email, until we've done the handshake and socialist millionaire protocol (or verified keys by some other means) setup.
I am willing to bet like 70-80% of people who send email to each other physically have their email clients online at the time they do it, even if they take a lot longer to answer - especially with the number of smartphones out there. So we can setup an OTR session after 1 message the vast majority of the time, and then reuse the same session as much as possible.
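A toy model of that flow (honest "not encrypted!" banner until the handshake lands, then a persistent session reused for everything after), with all names hypothetical:

```python
class AsyncOTRMailbox:
    """Toy model of OTR-over-email as proposed above: mail flows
    immediately, visibly flagged as unencrypted, until a handshake
    completes; the established session is then reused."""

    def __init__(self):
        self.sessions = {}   # peer -> established session key (toy)
        self.pending = set() # peers we have piggybacked an offer to

    def send(self, peer: str, body: str) -> dict:
        if peer in self.sessions:
            return {"to": peer, "body": body, "encrypted": True}
        # First contact: attach a handshake offer to the plaintext mail.
        self.pending.add(peer)
        return {"to": peer, "body": body, "encrypted": False,
                "banner": "not encrypted!"}

    def handshake_completed(self, peer: str, session_key: bytes) -> None:
        """Called after the async handshake (and SMP or other key
        verification) finishes, possibly a day or two later."""
        self.pending.discard(peer)
        self.sessions[peer] = session_key
```

The email-latency argument is exactly this: the unencrypted window only covers the first round trip, which for most correspondents closes within minutes.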
For now, the only reasonably usable secure key exchange method seems to be what WhisperSystems are doing on their phone app (safe against MITM if the parties know each other, and very hard to MITM even if not - especially not automatically).
DNS does not address it, and even DNSSEC does not (if you can forge the certificate, and you can mitm the traffic - which state actors are all capable of - then it doesn't matter that you can't forge the DNS response itself).
Actually, it should be the other way around: dnschain bridges DNS resolution and Namecoin, so there's no need to modify existing software.
They can be generated and installed into the OS keystore by your browser automatically. By the low standards of crypto it works pretty well. Any old email client supports it out of the box.
Also on Chrome at least certificate pinning should prevent that particular scenario.
As to the whois thing, what is stopping me from hijacking a domain, changing the whois and then generating keys? The webadmin might never even know. You don't even need access to their email.
Or to put it more realistically: What is stopping the NSA from pressuring a domain registrar into altering the whois for a brief period in order to generate MITM keys?
Why? Last I heard, breaking PGP was equivalent to being able to factor large integers into a product of prime numbers. So, NSA is able to do that, and no one else can, no one in the public heard about it, no university research mathematician published about it, NSA has mathematicians who figured out how to do that but their major profs back in grad school don't know how, no one got a Fields Medal for it, etc.? I don't believe that.
What's going on here?
He means I need a Faraday cage? Okay, tell the NSA I have one; put it in place this afternoon.
He means the NSA has trained cockroaches that can wiggle into my hard drives while I sleep and steal all my data? If so, then fine. I'll spray bug killer.
Otherwise, why should I believe that the NSA could crack my PGP encrypted e-mail?
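The equivalence being invoked is easy to see with toy numbers: once you can factor n, the private exponent falls out immediately. Deliberately tiny textbook RSA, nothing like real parameters:

```python
# Toy RSA with tiny primes: the whole point is that anyone who can
# factor n back into p and q can derive the private key.
p, q = 61, 53
n, e = p * q, 17
phi = (p - 1) * (q - 1)          # knowing p and q gives phi(n)...
d = pow(e, -1, phi)              # ...and the private exponent is trivial

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt: factoring n was all it took
```

Nobody has a public factoring method that scales to real key sizes, which is exactly the commenter's point.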
We really do need to let users manage trust, because trust is a rich concept. And humans are actually really good at trust, because we've been thriving and competing with each other in complex social situations for a long time.
The trick is finding ways to recruit people's evolved trust behaviors into an electronic context. That is, can we build meaningful webs of trust through repeated social interactions, just like in real life?
So it's not the mail client vendors who are best positioned to solve the problem, it's the social networks.
(Whether they want to solve the problem is a separate question.)
I like the email model such that anyone can install and run an email server. I'd actively push friends, family and colleagues to use a decentralised email replacement that was as easy to use and secure as TextSecure.
The billing criticism is fair and warranted; currently, if you're sending over SMS, the first message can only contain 60 chars due to protocol overhead, so you often end up with short messages costing multiple SMSes.
There is a way to verify keys (manually!) but no indication that you have verified them.
The NSA isn't my concern, Google etc. are. I don't want to bother going to the lengths necessary to secure myself from the NSA since that just isn't practical. But it would be nice if google and its employees didn't have access to the plaintext of my email. If I send an email to anyone using gmail and they decrypt it in a way that lets google see my text when they reply, all of my own security steps are worthless.
(confession: I myself am too lazy to use PGP)
Maybe if email encryption was more like HTTPS more people would use it? Just transparent and easy.
Sure, but the point of a lot of comments here seems to be that It Can't Be Done.
It's just that lots of people think HTTPS security is not good enough. (And you can include me on that set.)
In the last few years we have seen IM and SMS merge into an almost seamless experience. Surely we could engineer a UI that also copes with larger bodies of text at the same time?
We need clients or servers that are multi-protocol. That way we can experiment with new ways of communicating.
The article also doesn't mention Bitmessage, which addresses a lot of the concerns. Bitmessage isn't forward secret though.
Also, about “terrible mail client implementations”: the problem is that, to not be terrible for many people, a client has to be built into GMail (and work transparently there). The consequences of that are obvious, I hope. So no, thanks.
If you click yes, you then exchange fingerprints using e.g. QR codes, and the authenticity of messages from Bob is retrospectively checked.
Problem is, it's not obvious this can be done without compromising privacy of location.
That's a problem for Free Software running on local machines!
Is it really fundamentally possible? The author asserts this without really backing it with anything. I can understand how OTR-like systems can work between a static pair of clients, but it is not entirely clear if it is possible at all to extend such scheme to work in scenarios where message delivery is async and I might be using a set of clients/devices for messaging.
SMTP is not meant to be secure. You insist in communicating through an insecure channel-protocol and making it secure as an afterthought, and it's always going to be inconvenient or otherwise suck. I say PGP is pretty good at what it does, and it's nice in that it doesn't promise what it doesn't do.
Of course, e-mail headers, including From and To, must necessarily transit as cleartext, even when e-mail bodies are protected by PGP. The keyserver should perhaps be the least of Matthew's concern.
It isn't about being NSA-proof; it's about having the volume of "enveloped"/PGP-encrypted email be so high that it isn't possible to directly target everyone.
However, if the government comes to Yahoo/Google and says "give us a backdoor", how can we be sure it doesn't happen?
> Modern EC public keys are tiny.
Well, which is it?
On the other hand, "owner trust" is a local concept which is not exported and is used solely for trust-path verification.
Even in its long form, it's relatively easy to generate different keys that have the same fingerprint.
I'd be much more surprised by a full fingerprint match. Wouldn't that imply a SHA-1 collision?
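(For scale, a back-of-the-envelope sketch, not part of the thread: a full v4 fingerprint is a 160-bit SHA-1 hash, so a chance full-fingerprint match would indeed imply a SHA-1 collision. What collides cheaply is the 32-bit short key ID that many tools display by default, as the standard birthday approximation shows:)

```python
import math

def birthday_collision_prob(n_keys: int, id_bits: int) -> float:
    """Probability that at least two of n_keys random IDs of id_bits bits collide."""
    space = 2.0 ** id_bits
    # Standard birthday approximation: 1 - exp(-n(n-1) / (2 * space))
    return 1.0 - math.exp(-n_keys * (n_keys - 1) / (2.0 * space))

# 32-bit short key IDs collide quickly among a modest keyserver population...
print(round(birthday_collision_prob(100_000, 32), 3))   # ≈ 0.688
# ...while 160-bit full fingerprints effectively never collide by chance.
print(birthday_collision_prob(100_000, 160))            # 0.0 (below float precision)
```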
PGP is complicated (VERY complicated, to the average user), resulting in next to zero adoption.
Simplify the goals in a way that can be upgraded at some later date.
I think we need a browser plugin (All browsers. Other non-browser tools too, ideally, but the browser is important) that lets you securely SIGN posts locally in a style more or less like GPG's --clearsign option. Ideally, this should literally be --clearsign for compatibility, with the plugin hiding the "-----BEGIN PGP SIGNED MESSAGE-----" headers/footers, though these details are less important.
The key should be automagically generated, and stored locally in a secure way. (Bonus points for letting you use the keyrings in ~/.gnupg/ as an advanced, optional feature.) The UI goal is to simply let people post things and click a "sign this" button next to a <textarea> or similar. Ideally, later on, this could become sign-by-default.
On the other side, the browser plugin should notice signed blocks of text and authenticate them. Pubkeys are saved locally (key pinning). What this provides is 1) verification that posts are actually by the same author, and 2) it proves that someone is the same author cross-domain (or as different accounts/usernames).
No attempt is made to tie the key to some external identity (though this would be somewhat easy to prove). The idea is to remove the authentication problem (keyservers/pki) entirely. This can be man-in-the-middled, but the MitM would have to be working 100% of the time or the change in key will be noticed.
No attempt is made regarding encryption (hiding the message). This should also greatly simplify the interface.
The goal here is to get people using proper (LOCAL STORE ONLY) public/private keys. The UI should be little more than a [sign this] button that handles everything, and a <sig ok!> icon on the reading side. It should be possible to get the average user to understand and use such a tool.
Later, when the idea of signing your posts has become more widespread and many people have a valid public/private key pair already in use, other features can be added back in. As those "2nd generation" tools have a large pool of keys to draw from, it should be easier to start some variant of Web Of Trust. Even if that never happens, getting signing widespread is useful on its own.
I realize this doesn't protect against a large number of well-known attacks, and only offers mild protection against MitM. This is intentional, as the goal is getting people to actually use some minimal subset of PGP/GPG-like tools, possibly as an educational exercise. The rest of the stuff can be addressed later.
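The pinning behaviour described above can be sketched in a few lines. (Python stand-in for the proposed plugin; all names are hypothetical, and a real version would verify the clearsigned block against the pinned key rather than just hashing it.)

```python
import hashlib

class KeyPinStore:
    """Trust-on-first-use (TOFU) pinning: remember the first key seen per author."""

    def __init__(self):
        self.pins = {}  # author -> pinned key fingerprint

    def check(self, author: str, pubkey: bytes) -> str:
        fpr = hashlib.sha256(pubkey).hexdigest()
        pinned = self.pins.get(author)
        if pinned is None:
            self.pins[author] = fpr   # first sighting: pin the key
            return "pinned"
        if pinned == fpr:
            return "sig ok"           # same key as every previous post
        return "KEY CHANGED"          # possible MitM (or a legitimate rotation)

store = KeyPinStore()
print(store.check("alice", b"key-A"))   # pinned
print(store.check("alice", b"key-A"))   # sig ok
print(store.check("alice", b"key-B"))   # KEY CHANGED
```

Note there is no identity claim anywhere: the store only answers "same author as before?", which is exactly the cross-post continuity the proposal is after.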
My design has a REST endpoint that runs locally, and a JS client/SDK then connects to it on a known endpoint. My "MVP" version is to have the user running an interface with the request queue in a separate tab, and the server always at a fixed localhost:port endpoint. The client page would then issue a REST request that hangs until the user responds to the request in the separate tab.
This would conceivably work as a browser plugin as well, since I'm planning to write it in JS for Node: basically the server logic would instead live in the plugin, and the SDK would check for the plugin's presence before making a REST call. IMO the advantage of making it a REST endpoint, though, is protecting private keys from whatever else might be going on in the browser process(es). Based on my own worst-case assumptions; unsure if it's actually an issue with the plugin architectures.
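The hanging-request idea can be sketched as a pending-approval queue. (Python stdlib stand-in for the planned Node service; all names are hypothetical, and the fake `SIGNED(...)` string stands in for a real GPG call.) The REST handler blocks on an event until the user approves the request from the queue tab:

```python
import threading
import time

class SigningQueue:
    """Pending signing requests; the REST handler blocks until the user approves."""

    def __init__(self):
        self.pending = {}              # request id -> (message, event, result slot)
        self.lock = threading.Lock()
        self.next_id = 0

    def submit(self, message, timeout=5.0):
        """Runs in the REST handler thread: hangs until approved or timed out."""
        done = threading.Event()
        slot = {}
        with self.lock:
            req_id = self.next_id
            self.next_id += 1
            self.pending[req_id] = (message, done, slot)
        done.wait(timeout)             # the "hanging" REST call
        return slot.get("signature")   # None if the user never approved

    def approve(self, req_id):
        """Runs from the queue-tab UI: sign and release the waiting caller."""
        with self.lock:
            message, done, slot = self.pending.pop(req_id)
        slot["signature"] = f"SIGNED({message})"  # stand-in for a real GPG call
        done.set()

# Demo: one client request, approved from "another tab".
queue = SigningQueue()
result = {}
client = threading.Thread(target=lambda: result.update(sig=queue.submit("hello")))
client.start()
time.sleep(0.1)                        # demo only: let the request register first
queue.approve(0)
client.join()
print(result["sig"])                   # SIGNED(hello)
```

The timeout on `submit` doubles as the denial path: if the user ignores the queue entry, the client's REST call eventually returns with no signature.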
I'm an aspiring interaction/UX designer so that is the aspect I am focusing on, and my motivation is that I am personally starting to use GPG with a Yubikey + offline master. So yea, collaborators hit me up, especially if you're in SF.