I'm not saying I'd be the one to implement this, but at the very least, I'd like to start collecting ideas. Maybe I or someone else could realize them eventually. So let's talk. Please post your thoughts on what would make for a good, user-friendly, and secure wrapper around GPG. Thoughts from security specialists would be especially appreciated.
I'll get the ball rolling with a few basic requirements:
* No roll-your-own crypto. Absolutely none. All algorithms must be provided by a mature, universally trusted library. (And that library must of course be GPG, since that's the whole point of the project.)
* Don't use any libraries that, while sound, expose a low-level API such that we could unwittingly call the API in unsound ways. An example of this would be OpenSSL. (Just an example; obviously OpenSSL != GPG.) See this for a discussion of the library misuse problem: https://news.ycombinator.com/item?id=4779015
* Users should have to understand as little as possible about the inner workings of PGP/GPG. However, in any instance where hiding details would compromise security, details must not be hidden. For example, people need to understand the implications of signing someone's key. We don't hide that part from them. But they shouldn't have to fiddle with text files and command lines. We do hide that part.
* A "good user experience" is more than just a GUI. We already have GPG GUIs. User experience doesn't start when the user first boots the program. It starts at the moment a person first hears about GPG and wants to learn more. Thus, good UX is as much about documentation (including the product homepage) as it is about software.
When one suggests replacing OTR, one tends to get an earful about the importance of forward secrecy. I think forward secrecy is very important for systems in which there are extremely high-value keys that are "stationary targets". I think forward secrecy is less valuable in desktop applications, where the attacks that would cough up a persistent key would tend to be devastating to the whole cryptosystem anyways.
It's also worth saying that PGP isn't a particularly great cryptosystem. "Modern" PGP predates a lot of important stuff in crypto. But it's a very well studied cryptosystem.
There are strong cryptographers who are working on much, much better systems than PGP. The problem is that those systems will compete with amateur systems and the winner won't be chosen by security. At least with PGP, we know what we're getting.
Could you give us some examples of these? How far away from prime time usage do you estimate they are? Are any of them usable right now?
Some examples from PGP include Bob signing Ann's key without sufficient verification, or people publishing their private and public keys by accident.
Remembering that many people are just hopeless at security ('123456' used as passwords; people clicking through browser certificate warnings; people installing malware and ignoring OS warnings about untrusted sources) it seems a reasonable point to make: "Secure products can be made easier to use, and if they are both good and easy to use it will enhance security".
Is there anything available today that you'd recommend over PGP, regardless of usability or ubiquity? E.g., if one has to include crypto inside an internal-use-only email product that requires encryption and/or signatures - what's an alternative to PGP that would be considered reliable?
You're right - that would be a major boost to usability. One question though: Does this undermine the security of PGP in terms of identity verification? I mean, if I'm receiving an ephemeral public key over the wire, how do I know it's not being generated by a man in the middle? With semi-permanent, published keys, I can put my trust in the signatures. But I'd imagine that the scheme you're proposing doesn't have signed keys. Or am I mistaken about that?
> It's also worth saying that PGP isn't a particularly great cryptosystem.
Do you feel the best move is to push forward with PGP, use something else now, or wait for newer systems to be better-studied?
I'm glad I found your comment; I just pushed something like what you described to one of my repos yesterday. Hear me out, I'm not promoting myself out of context here.
I'm currently working on an OpenPGP integration for the Roundcube webmail project and have so far added functionality from the OpenPGP.js library. The pros of this are of course usability and that no external applications are necessary; the cons are, among others, what you just wrote above.
To be able to support bridging local GPG binaries and keyrings into graphical browsers without exposing any critical information, I threw together an HTTPD which listens on the client's localhost. The concept is already proven to work; now it's a mere matter of implementation. It's based on the PyGPG library, which wraps GPG in Python and is compatible with both Windows and *NIX systems as long as they can execute GPG and Python (which they can).
It's still a work in progress but currently supports key generation and key listing in response to HTTP requests. Through cross-origin resource sharing users can specify which domains should be allowed to speak to it in a simple text file separated by line breaks.
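For illustration, the origin-allowlist idea can be sketched in a few lines of Python. Everything here (the handler name, the allowlist contents, the port, the JSON shape) is invented for the sketch and is not pygpghttpd's actual code or API:

```python
# Minimal sketch of a localhost keyring daemon that only answers requests
# whose Origin header appears in a user-editable allowlist (one origin per
# line, blank lines ignored). All names are illustrative, not pygpghttpd's.
from http.server import BaseHTTPRequestHandler

def load_allowlist(text):
    """Parse allowlist file contents: one origin per line, blanks ignored."""
    return {line.strip() for line in text.splitlines() if line.strip()}

def origin_allowed(origin, allowlist):
    return origin in allowlist

class KeyringHandler(BaseHTTPRequestHandler):
    # In the real tool this would be read from the user's allowlist file.
    allowlist = load_allowlist("https://mail.example.com\nhttps://webmail.example.org\n")

    def do_GET(self):
        origin = self.headers.get("Origin", "")
        if not origin_allowed(origin, self.allowlist):
            self.send_response(403)
            self.end_headers()
            return
        self.send_response(200)
        # Echo the origin back so the browser's CORS check passes.
        self.send_header("Access-Control-Allow-Origin", origin)
        self.end_headers()
        self.wfile.write(b'{"keys": []}')  # a real daemon would call gpg here

# To run: http.server.HTTPServer(("127.0.0.1", 8642), KeyringHandler).serve_forever()
# Binding to 127.0.0.1 only keeps the daemon unreachable from off-machine.
```

Note that a production version would also have to handle CORS preflight (OPTIONS) requests; this sketch only shows the allowlist gate itself.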
I wrote up more detailed info on the approach and usage here: https://github.com/qnrq/rc_openpgpjs/issues/64#issuecomment-...
Source code currently available here, although it will be separated to its own repository later: https://github.com/qnrq/rc_openpgpjs/blob/pygpghttpd/pygpght...
I can conclude that what you are requesting is actively being built and partially already exists but still needs to be put to use. Hope you don't view this as shameless advertising, because it's not. I'm only responding because your ideas are spot on what I pushed yesterday.
Any form of feedback is greatly appreciated.
I'm working on encryption and signing as well. Key generation would also be nice. I need a little help with it, though.
I don't mean to be harsh, but server-side crypto is far from a good idea. It provides violent regimes, such as America, technical grounds to force hosts into backdooring their server-side crypto. Anything like this must be done on the client for safety and privacy to be ultimately achieved.
You should rethink that design strongly.
I'm also considering setting up some scripts for outbound mail - to automatically encrypt any (non-encrypted) mail I send if I've ever received encrypted mail from the recipient. Have the mail server keep a record of email addresses and public keys, and auto encrypt where possible.
While I'm reasonably sure GPG/encfs on my phone will reduce my exposure to "dragnet style, intercept and archive everything" surveillance, if the NSA are after _me_, I've no doubt that there are people at the NSA who've already worked out how to coerce Apple into pushing a software update to my phone that sniffs around with root access looking for things that look like private keys, and keylogs things that look like passphrases - and ships them all off to Utah.
(And, truth be told, I strongly suspect all my Windows and Mac OS X boxes would fall in exactly the same fashion, and it wouldn't surprise me too much to find the firmware in my bios or USB bridge or ethernet adaptor or hard drive on my linux boxes is equally traitorous and ready to "sell me out"…)
One of the factors which can narrow the scope of attackers is to use products like Crypto Stick, but then again, what is preventing a computer from being rootkitted and having its keys stolen as soon as they are exposed in the system?
The dilemma here is the same as with filesharing: if it's accessible it can be copied and transferred. There's no patch against that.
If you see encryption tools that others have written, and all you can imagine is implementing them in insecure ways, then that's your own issue.
1) the server provided the cryptographic libraries (so they may be compromised)
I don't know how the not-yet-standardized window.crypto will address 2), but as of now you can't trust DOM-level encryption.
But yes there is JS crypto in the project, as a planned separate optional driver.
To me, one of the most important things about PGP is that the plaintext and the encryption process are entirely in your control. (At least to the extent that you control your own computer.) You lose that assurance if you do server-side encryption.
1) you connect to server A
2) you want to encrypt sensitive information. You send it to (localhost) B
3) you receive encrypted data
4) you use that data through server A
No, the sensitive information isn't being protected from localhost but from server A and anything else on the path between user and message destination. localhost is the user. For clarification: GPG is on user's localhost, not the server.
1. Alice uses a web app served by server A
2. Alice wishes to send an encrypted message through the web app served by server A to Bob
3. Alice writes the message on her client sided browser
4. Alice finishes and clicks "Send"
5. The web app's JS sends the cleartext to pygpghttpd on Alice's localhost
6. pygpghttpd responds with the ciphertext to Alice's web browser
7. Alice's web browser replaces the cleartext content with the encrypted content
8. The encrypted content is sent to server A to be routed to Bob
1. Bob receives an encrypted message from Alice on the web app served by server A
2. Bob's web browser sends the ciphertext to pygpghttpd on his localhost
3. pygpghttpd responds with the decrypted message
4. The decrypted message is rendered for Bob
So either you're trusting the web app's js, or there's some other unspecified mechanism for ensuring that it's behaving in a trustworthy way.
Cryptocat (ignoring security for a moment) is an attempt to solve the accessibility problem in a way that actually works. Widespread adoption works well with some central aspects - and if we're talking about dragnet avoidance for most people then this is probably a reasonable compromise.
If Google generated public/private key pairs for each Gmail user, tied the keys to each account, and then used the public keys to encrypt all email (taking a similar approach to using real OTR for Google Hangouts), then all Google-to-Google communication would have a layer of protection from unwanted ISP monitoring. Granted, you're still trusting Google with the private key, and a warrant or some other request could still reveal it, but you'd actually have wide-scale use of the thing. Facebook could do something similar for their chat, and then you'd have most of how people actually communicate covered across multiple devices in a way that protects against sweeping surveillance.
I'm pretty ignorant about most of this - am I missing something obvious that would prevent this from working? Obviously you're still trusting the companies, but we were doing that anyway.
For instance, OTR (as implemented by IM clients) is an example of an extraordinarily simple cryptosystem (I'd argue too simple) that at least provides for a notion of persistent keys.
True, but that doesn't mean the convenient and insecure apps have to totally dominate the market. I'd be willing to bet there are plenty of people who want true, reliable security, and are willing to take on a little hassle for it.
> I've read that even journalists that face real risk communicating without pgp don't bother because it's difficult to participate in something that requires others to also participate.
The same could be said of the early days of email: It's only useful if other people are also using it. But we overcame that chicken and egg problem. The same could happen for PGP. It's a cost-benefit equation: People need to be sufficiently worried about privacy, and the friction of PGP needs to be sufficiently low. There must be some tipping point.
> Cryptocat...is probably a reasonable compromise.
Not if it can be cracked with the processing power of a mere desktop computer. I've heard claims to this effect. What are the use cases for a sort-of-secure app like Cryptocat? Certainly not hiding from sophisticated adversaries. So it's just about protecting your data from casual users then. But in that case, I'd argue that even your basic instant messaging client is adequately secure, in that a casual user doesn't know how to play man-in-the-middle.
> If Google generated public/private key pairs for each gmail user
I didn't mean to imply that cryptocat itself is a reasonable compromise (I don't know enough), but that a centralized system that implements encryption for its users might be which leads into your fourth point.
If you're collecting everything from everyone in perpetuity, the risk for abuse I see is that when someone becomes interesting to the government, it can query its data set and use it against them.
I'd think there would be a way for these companies to use client-side encryption to protect a user's data until they become 'interesting', at which point the user would have to switch to more fundamentally secure methods (which is how it currently is anyway).
Seems like the easiest way to get the most people generally protected from abuse.
Is using https enough? If that's the case then it seems like most communication would be protected anyway. I got the impression that this wasn't true - not because I don't trust google, but because of something else (it seemed google genuinely didn't give access to everything yet people made comments about no digital communication being secure: http://www.youtube.com/watch?v=vt9kRLrmrjc).
To give a concrete example, let's say Google adds JS-based PGP support to Gmail. Suppose that, in general, it works. Inasmuch as Gmail delivers properly encoded PGP messages to your recipients, and it can read PGP messages that are sent to you. But suppose further that Google is somehow compromised. Maybe through technical means, maybe through social engineering, maybe through legal pressure. And then a malicious JS payload is delivered to users, hidden somewhere deep in the page. This payload allows PGP messages to continue being sent and received. But it also backdoors you. Maybe by creating an alternate version of every message encrypted with the attacker's key.
Unfortunately, current clients are not at all equipped to detect if this is happening. For the browser to be able to participate in a truly secure crypto system, it would need to have the most critical parts built in, not provided by websites as JS.
> Is using https enough?
It's generally believed to be adequate for protecting against a man in the middle. It doesn't help you if your computer or the server is compromised. Whether you trust Google or not is your choice. The way I see it, every entity that stores data will eventually have abuse, a leak, or a breach. So if you're at peace with that risk, then HTTPS is enough.
However, it's still a minor pain to use for web-based email. You have to remember to select the entire body, right-click, Services, then select Encrypt. Not sure what can be done other than make it a browser extension, but the history on them isn't exactly stellar, security-wise.
Go to the GPG Tools homepage. It's kind of a mess of links, without a dead-obvious path for the absolute beginner. Should I click "Quickstart tutorial" or "Introduction"? Or should I just download the installer, which is my first step for 90% of applications? And the experience doesn't get less confusing when you get past the home page. If anything, it gets more so.
GPG Tools strikes me as a project that is by hackers, for hackers. Nothing wrong with that. But it's very different from what I envision. I want a UX that holds my hand. It should be like a teacher, patiently guiding me through everything I need to know to use GPG.
Fortunately, the mental model for how GPG works isn't actually that complicated. I think most people can understand, for example, what key signing is, if it's explained well.
Apologies to the maintainers of the GPG Tools. Their work is admirable and greatly exceeds the whole lot of nothing I've contributed. I'm hoping this will be interpreted as constructive criticism.
> However, it's still a minor pain to use for web-based email.
I don't see this problem being solved without something implemented in native code. See:
There have been proposals to add a crypto API to browsers, where such API would be implemented in native code. I.e. you could call the API from JS, but the algorithms would all run in native code. I don't know if any of these proposals will go anywhere.
Conceivably, one could also just up and write C modules for popular browsers. But then you'd have to get those accepted by the browser makers.
Either of these solutions is beyond the scope of what I envision, at least for now.
It's unfortunate, because we could use a secure browser crypto interface much more than we could use better browser interoperability with random non-web technology. But our industry is, of course, fundamentally unserious about security.
That being said I think the real solution is OS level integration. Perhaps a facebook app to help grandma with web of trust.
Looking at other password / certificate mistakes (people use 123456 as a password for important accounts, people click through warnings) I think this might be harder than it seems.
See for example some of the hoax accounts on Twitter. People do get confused, even though it should be easy enough to spot the real account over the hoax account.
Online reputation and trust is important. It's a shame that Klout (totally irrelevant to this) is what most people think of when I say 'online reputation'.
GPG Mail is a plug-in for Mail.app. My point is: why not a similar browser plugin?
Yes, afraid so.
> I am talking about an actual application-level plugin. Think Flash and Silverlight, not Greasemonkey.
This is absolutely an option. It's a tall order, because there are a lot of browser/OS combinations out there. But I believe it could be very successful.
"Paste and crypt"
Paste text into a field or mark it. Hit encrypt. A popup/menu/whatever lets you choose or import public keys.
"paste and decrypt"
basically the same functionality in reverse
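Under the hood, "paste and encrypt" is little more than assembling a gpg invocation and feeding it the pasted text. A sketch of the command construction (the function name is mine; whether to shell out to gpg or go through a binding is an open design question):

```python
# Sketch of what "paste and encrypt" might run under the hood: build the
# gpg argument list for the pasted text and the chosen recipients. Only
# the argv construction is shown; the actual call (subprocess.run with
# the pasted text on stdin) is left as a comment so the sketch stands alone.

def encrypt_command(recipients, sign=False):
    """Return the gpg argv for encrypting stdin to the given recipients."""
    cmd = ["gpg", "--encrypt", "--armor"]
    if sign:
        cmd.append("--sign")
    for r in recipients:
        cmd += ["--recipient", r]
    return cmd

# In the real tool, roughly:
# result = subprocess.run(encrypt_command(["ann@example.com"]),
#                         input=pasted_text.encode(), capture_output=True)
```

"Paste and decrypt" would be the same pattern with `--decrypt`, plus passphrase handling, which is exactly the part a good UI should take care of.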
If we someday have an ecosystem of C browser extensions, in-browser crypto may be much more promising. Or not. I suppose there could be other problems besides the language.
* Some kind of "sync" for people who hate dealing with files.
* Copying a single file by whatever means you see fit, such as a USB stick. Many people, such as myself, prefer the simplicity and transparency of a good old file.
Each of these has potential security pitfalls. Those would have to be thought out.
The concept of keypairs doesn't seem hard to me. And I think that can be abstracted away a bit anyway. You just need to know that there's this super-secret file (the private key) that you should never leak to anyone, and you need to sync it to your devices. So far, not so bad. As for the public key, I think the software can mostly just handle that for you. I.e. it can take care of uploading it to keyservers.
Signatures might be harder for people to understand. But here still, a good UI could help to abstract that away a bit. Imagine I can just click "get my key signed," enter an email address, and that's it on my end. No more steps for me to take. On my friend's side, it's just an email that comes in, probably with a link using the application's special protocol. My friend clicks the link, her PGP UI boots, and a yes/no pops up. Done.
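The "click the link, her PGP UI boots" step would hinge on a custom URL scheme registered by the application. A sketch of the handler's parsing side (the scheme name `pgpui://` and its fields are entirely made up for illustration):

```python
# Sketch of parsing a hypothetical signing-request link like
#   pgpui://sign?fpr=0123ABCD
# The scheme and field names are invented; a real app would pick its own.
from urllib.parse import urlparse, parse_qs

def parse_signing_link(link):
    """Extract the requested action and key fingerprint from a pgpui:// link."""
    parts = urlparse(link)
    if parts.scheme != "pgpui":
        raise ValueError("not a pgpui link")
    query = parse_qs(parts.query)
    return {"action": parts.netloc, "fingerprint": query["fpr"][0]}
```

After parsing, the UI would look the key up, show the yes/no prompt described above, and only then call down into GPG.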
So I don't think that understanding the mental model is the bottleneck right now. Rather, I think it's that the software and the accompanying documentation are not optimized for getting a naive user off the ground as fast as possible (without compromising security).
There's a real opportunity to build something much, much simpler on top of PGP. All you really have to do is pick some sensible defaults and automate a few steps. Look at how many nerds can't be bothered with encrypted communication, let alone normal people.
I face this daily, since I'm the go-to guy at my office for scripting/coding solutions for these folks. They just want it to work without having to learn or understand their decisions. And these are the same people that will spend hours figuring out complex lunch accounting issues or read volumes on video game strategies or rebuild engines.
Bob has to make sure that the public key he thinks is Ann's really truly is Ann's, and not Eve's.
That would be a hard problem to solve.
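Hard at the social level, at least. The mechanical part, comparing a fingerprint obtained out-of-band (over the phone, in person) with the one the software displays, is easy for a UI to support. A tiny sketch (the helper is mine, not a gpg API):

```python
# One concrete form of "make sure the key really is Ann's": compare the
# full key fingerprint obtained out-of-band with the one the software
# shows. This normalizer tolerates the spacing and case people use when
# reading fingerprints aloud.
def same_fingerprint(a, b):
    def norm(s):
        return "".join(s.split()).upper()
    return norm(a) == norm(b)
```

A friendly UI could build this into the signing flow: show the fingerprint in grouped blocks and ask the user to confirm it against what Ann read out, rather than assuming they know to do so.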
S/MIME works everywhere (Outlook, Mail.app, iOS, Thunderbird, BlackBerry, Windows Phone, Lotus, ... ) out of the box. No plugins required.
How many regular users do you know who actually edit their list of trusted CAs in their browsers? (I sure don't, though I probably should.) Who would manually remove DigiNotar immediately because they heard on the news they got hacked? No, Big Well-Designed Site is signed by Big Company, user trusts it.
On the other hand, if I give you a key that's signed by someone you trust, you can make an informed decision on whether to trust my key. It is a decision on a level where the regular user might feel they have something to say (whereas a regular user is not likely to feel they know more about security than Big Company).
Perhaps most users would have very few keys that they trust/verify. But I'd say that's a good thing, because if you haven't gotten real verification, it's just a false sense of security.
I'd be curious to hear from security specialists about S/MIME. How thoroughly studied is it? How are the libraries? I have hardly ever heard it discussed, so I'm a little hesitant at the moment.
We need no software. We need public awareness, tutorials, and probably some easy-to-use CA.
I bet the adoption rate of S/MIME is way beyond GPG's if you check corporations and large enterprises.
for example: Germany
Do you know if S/MIME can work on a distributed model?
Also, what are the advantages of S/MIME over PGP? I hear what you're saying about enterprise adoption, but I'm more concerned with the thoroughness of peer review than usage rates.
S/MIME is based on X.509, same thing that powers TLS/SSL.
Depending on the client you're using, it shouldn't be too hard to prune the trusted CA list to only include providers you choose to trust. If you want, only include your CA and remove all others.
But this would be useful in a corporation where it's possible to centrally manage CA lists for approved applications.
What I'm trying to say here is: I'm a bit of a geek, and if I don't understand how it works, there's no way (e.g.) my parents will. If S/MIME is the solution, there's a serious education battle that needs to be fought.
I say this as someone who uses GPG (via GPG Tools on OS X) without any bother.
It's even easier to do, because you don't have to trick a CA in creating a duplicate key.
In some ways it's easier with PGP, yes.
But in some ways relying on a possibly-hostile CA is worse: if the software doesn't really give the user any visibility of key changes, then the impersonator won't even need to social-engineer the recipient with "whoops I lost my key". Instead, the duped recipient will just see "Signed by Big Trusted CA" with a shiny green padlock, and will think everything is fine, even though the key under the hood has changed.
It seems as though many of the web-of-trust issues that impeded PGP 15+ years ago could be helped by current day social networking practices, if a social network pushed it. PGP/GPG could be used under the hood, as long as the user never has to deal with an actual file anywhere unless they wanted to.
The consequences of evil twin attacks may be worse, but if the 'verify' action was not as casual as mere friending, then perhaps it would be less susceptible.
Are any startups working from this angle?
I have a very rudimentary prototype up on Github if anyone is interested. It has some throw away keys and allows you to encrypt for those via right-clicking text in a textarea. The code uses OpenPGP.js.
A very small (but rather important, depending on occasion/readership) detail:
gpg --encrypt --sign --armor -r recipient@email -r email@example.com filename
-r recipient Specifies recipients of the message. You must already have private keys of the people listed. [...]
* Get Thunderbird and Enigmail.
* Use Enigmail to generate your keypair
* Upload your public key to the keyserver (via the GUI)
* Proceed to use email.
A great tutorial, however. Very accessible in my opinion, and considering its purpose, my previous paragraph is more of an aside.
Still, I suppose it's possible for an adversary to work around this as well. If you can find enough people who are 1) willing to falsely sign a key, and 2) trusted by others, you can have these people sign a spoofed key. But then these people would be putting their reputations on the line, and the probability of being exposed is high. Thus the cost of the attack is high.
The lesson being: If you're emailing info that is valuable enough to warrant such a costly attack, verify the key through some other means. Meet the message recipient in person, for example. And consider a thorough security audit of everything in your digital and physical life. You're obviously operating in a far more dangerous world than I do. There are probably many vulnerabilities available to attackers that have nothing to do with your email.
My warning was truly an aside, and given the nature of a large group of visitors, of course a handful might not follow best practices and verify the signatures, etc.
Edit: never mind, I see you meant the site with the tutorial, not key exchange
This should be ubiquitous, and easy to set up for everyone, which it isn't, nor are any of the numerous Outlook plugins either.
gpg --armor --export --sign <email>
I changed it to
gpg --armor --export <email>
and it works. Just pointing this out as a typo.
Is that still the case? Was it ever? I don't know enough about PGP to know, unfortunately.
Though I'm not sure why the author focuses on non-threats like known-plaintext attacks, which GPG isn't vulnerable to, and not these issues.
But what's holding me back is webmail.
I don't use the web interface often, but it has proven to be absolutely crucial to be able to get some important mail (boarding pass, mail explaining how to get somewhere etc.) from any computer.
I don't know of such an application, or whether the approach is rigorous, I'm afraid. But I think that's the shape of the solution.
First, I'm not remotely interested in running my own mail infrastructure anymore. Been there, done that. Today it's much too hard to get mail accepted by others.
But more important: iPads don't have a USB connector, and my mobile phone doesn't have one either. Friends have Macs, and in other places there might be other crippled devices.
The web is a universal building block. USB sticks are not.
They could be compromised, but they claim all data is encrypted locally before being sent to their servers. I have not verified that claim.
Relies upon trusting last pass and trusting the iPad of course, both of which are questionable.
Everything that requires some special software to run is a non-starter.