Why Johnny Still Can't Encrypt: Evaluating the Usability of a Modern PGP Client (arxiv.org)
259 points by lisper on Nov 7, 2015 | 160 comments



Backwards compatibility is the killer.

The whole design of PGP is to be the envelope to make email private, versus the plaintext postcard that everybody can read. It works with existing servers and existing mail clients.

The biggest Snowden revelation is the importance of metadata. Just knowing whom you talk to, when, is frequently enough to compromise the parties involved. You might be doing something legal now, but you can’t tell when some despot takes over and makes your activity illegal. When data can potentially last your lifetime, you need to consider this sort of thing. Illustration: Using Metadata to find Paul Revere. [1]

So, we need a new mail transmission protocol that limits the amount of metadata that can be seen. Something like Dark Mail [2]. I don’t think Open Whisper Systems' Signal hides the metadata.

The problem is that Dark Mail needs to be developed and its security reviewed, and people need to adopt it. It needs new servers and clients, and the prevalence of Windows shows just how difficult that problem is. Also, any new messaging program these days needs a story about web and mobile.

In practice, I heard someone say that the biggest improvement in people’s privacy has been use of Gmail. The clients connect to it with SSL certificates controlled by Google; Google promotes 2-factor authentication and access tokens; and Google’s legal team challenges subpoenas. As long as you are not leaking national security secrets, I think this is adequate until a better system is deployed.

[1] http://kieranhealy.org/blog/archives/2013/06/09/using-metada...

[2] https://darkmail.info


> In practice, I heard someone say that the biggest improvement in people’s privacy has been use of Gmail.

In practice, until Snowden happened, the NSA was able to access all of Google's internal data, because Google replicated its whole datacenters in plaintext over links snooped by the NSA and GCHQ.

http://www.slate.com/blogs/future_tense/2013/10/30/nsa_smile...


Correct. And now they encrypt all the inter-data center traffic.

Question: Is that done on an end-to-end basis? Or do they encrypt the links between data centers?

I want to encrypt a 10g ethernet and all the solutions look quite expensive.

Has anyone done high speed encryption (i.e. 10gbps/1500 byte packets) with strongswan or similar?


You have to encrypt the transport at the application layer when you're talking about the scale of major internet companies: there are so many links going out of each datacenter that it's not feasible to do IPsec or similar lower-level encryption. There are few, if any, hardware devices sold that can do encryption at the 10-100+ Gbps speeds that datacenter links use, and it's much easier to amortize the encryption cost across all your application CPUs than to have a small number of devices or dedicated hosts doing crypto at a lower layer.


> There aren't many hardware devices sold that can do the encryption at the 10-100+ Gbps speeds

You are vastly underestimating AES-NI. Most modern Intel CPUs can handle that kind of encryption throughput (when paired with fast enough memory). Even my old Sandy Bridge-E CPU from 2011 gives me ~96 Gbps of AES throughput (with quad channel memory).


Should we trust the implementation of AES-NI in this context?

http://arstechnica.com/security/2013/12/we-cannot-trust-inte...


AES-NI and RDRAND are different beasts. AES-NI is deterministic, and thus much more difficult to practically backdoor.


Sorry if this is a dumb question, I'm just trying to think it through. It seems like it has to use the key you give it; it seems like the output stream has to be correct, or it just won't work.

I'm not really sure how this would work for network traffic, I guess for disks I would expect CBC mode, which would require an IV? Is that right?

So could the hardware be made to expose the key, or IV's, or leak information in some unexpected way?


Leaking an IV isn't a problem, but leaking the key would be. If AES could be made to leak the key, it would be a huge problem with the cipher itself. Otherwise leaking the key would be restricted to some extremely complex timing-level behavior of the CPU, which would be difficult to exfiltrate all the way to the network level without network card cooperation (though many 10gbps network cards are Intel-made).

Perhaps feasible, but far far less likely than compromising the entropy of a random number generator.


I was replying to a question about whether or not you could encrypt the links between the datacenters, where you typically have large core/cluster routers that handle traffic entering and leaving your datacenter. I was talking about the fact that these routers have dozens or hundreds of 10-100 Gbps ports on them, and are simply not built to do line-rate IPsec on each port. So you can't just turn on IPsec for the routers at your datacenter border and magically get encrypted transport between your datacenters.

You can set up IPsec tunnels on your application nodes and use that to get transparent encryption of all traffic, but that is pretty complicated to configure and manage. It's much easier to just have your different applications use an encrypted and authenticated transport when talking to each other over the network, whether they are in the same datacenter or talking to a remote datacenter. This is how Google and Facebook do their cross-datacenter encryption.
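A minimal sketch of that application-layer approach using Python's stdlib `ssl` module (the hostnames and port are hypothetical examples, not anything from a real deployment): each service wraps its own sockets in TLS with certificate verification, so the traffic is protected no matter which links it crosses.

```python
import ssl

# Each application endpoint does its own TLS, so the transport is encrypted
# end to end regardless of which datacenter links the packets cross.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
ctx.check_hostname = True                     # verify the peer's certificate name

# A client would then wrap its TCP socket before talking to the remote service
# (hostnames below are illustrative):
#
# with socket.create_connection(("db.remote-dc.example", 5432)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="db.remote-dc.example") as tls:
#         tls.sendall(b"...")
```

The point of the sketch is just that the encryption decision lives in the application, not in a router or a tunnel at the datacenter border.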


I believe end to end is simpler when you control all the ends and can tweak all the software, like they do. There's also an advantage of having very fast processors already provided.


Given the large interest and discussion, I've created a discussion forum here:

https://groups.google.com/d/forum/highspeedencryption

It's public, but please do join if you're interested.

I think it would benefit the community to have an open and central location for best practices of high speed encryption.

Epistasis, Phil21, others, please do join and let's continue the discussion and progress there!


Companies like Ciena make line rate encryption cards that protect links. Not cheap, but not outrageously expensive for that level of gear.


With hardware acceleration, my quad core laptop can encrypt faster than 20gbps.

Without it, it can still encrypt at at 4gbps.

I was going to suggest you could place a couple servers at each end, but you only need one low end server.

Anyone offering expensive solutions is doing you a disservice.


Is that at the network level? My experience is that packet handling in Linux and FreeBSD does not make high speed IPsec feasible.


With things like kernel bypassing and bulk packet processing it's really not that hard to get to 10gbps on a single server. Heck, Linux is getting close to ready for 100gbps: https://lwn.net/Articles/629155/

I don't know exactly what resources IPsec takes, but you can definitely get the "or similar" part of the question.

AES-NI takes 3.5 cycles per byte. BLAKE2 takes 3.08 cycles per byte. That means that 2-2.5 cores can handle all the encryption and signing by themselves. Then you have 80% of the machine's resources free just to shovel packets in and out.
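As a back-of-the-envelope check of that core count (the cycles-per-byte figures are the ones quoted above; the 3 GHz clock is an assumption):

```python
# Cores needed to encrypt and MAC a 10 Gbps link, using the per-byte cycle
# counts quoted above. The 3 GHz per-core clock is an assumed figure.
AES_NI_CPB = 3.5      # AES-NI cycles per byte
BLAKE2_CPB = 3.08     # BLAKE2 cycles per byte
CLOCK_HZ = 3.0e9      # assumed per-core clock speed
LINK_BPS = 10e9       # a 10 Gbps link

bytes_per_sec = LINK_BPS / 8
cycles_per_sec = bytes_per_sec * (AES_NI_CPB + BLAKE2_CPB)
cores = cycles_per_sec / CLOCK_HZ
print(f"{cores:.2f} cores")  # ~2.74 cores under these assumptions
```

Which lands in the 2-2.5 core ballpark for a slightly faster clock, leaving the bulk of the machine free to shovel packets.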


It's easy to get 10Gbps packet handling on Linux or FreeBSD, and it's easy to encrypt at >10Gbps on Intel, but I would love to know how to get both into a working IPsec solution at >10Gbps. Speculation isn't interesting, because clearly all the pieces should work, so I'm looking for anybody with actual experience with it working. I've tried and been unsuccessful, and have also been unable to find anybody else on the internet who has been successful.


I am in the same boat you are. I also appreciate your practical and concise point of view.

Yes, it ought to work quite easily, but in practice it doesn't yet, does it?

I'm planning to spin up a few servers in Amazon to test it there, just to get some benchmarks. What kind of testing and experimentation have you done? In my use case I would put the traffic through a single machine across a ten gig link to another machine, so I could ensure that all the traffic passes through it.

The practical use case is to encrypt traffic between data centers. I don't see a feasible way to do that a megabyte at a time; it almost certainly has to be done on a per-packet basis, either at the application, which is a little bit difficult to do in the short term, or in some sort of IPsec or other similar solution.

No question that the right solution here is to make all the applications do encryption, but again, I need to have a practical solution now, and then the developers can put that in.


I've only messed with a single tunnel at a time. However, there's a whitepaper from Intel from several years ago that says that multiplexing over 6-12 tunnels should possibly work:

http://www.intel.com/content/dam/www/public/us/en/documents/...

The question remains on how to perform routing for this. It appears that running Quagga on both end points, speaking BGP to each other, will allow ECMP over the 6-12 IPsec tunnels. Each individual stream will still be capped at ~1Gbit, but at least the total throughput could be used and there's no traffic between the two endpoints that's not encrypted.
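Illustrative sketch of what that ECMP distribution amounts to: a stable hash of each flow's 5-tuple picks one of the tunnels, so a given stream always rides the same tunnel (and stays capped at its per-tunnel rate) while different streams spread across all of them. All names and the tunnel count here are assumptions; real ECMP hashing happens in the router or kernel, not in application code.

```python
import hashlib

def pick_tunnel(src_ip, dst_ip, src_port, dst_port, proto, n_tunnels=8):
    """Map a flow's 5-tuple to one of n_tunnels IPsec tunnels.

    Stable: every packet of a flow hashes to the same tunnel, preserving
    per-flow ordering while flows as a whole spread across tunnels.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_tunnels

# Same flow always lands on the same tunnel.
t1 = pick_tunnel("10.0.0.1", "10.0.1.1", 40000, 443, "tcp")
t2 = pick_tunnel("10.0.0.1", "10.0.1.1", 40000, 443, "tcp")
assert t1 == t2
```

This is also why the per-stream cap exists: the hash pins each TCP connection to a single tunnel, so only aggregate throughput scales with the tunnel count.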

However, getting Quagga + BGP + ECMP + multiple IPsec tunnels + pinning tunnels to particular cores + setting up the RSS flows from the NIC to particular cores... Well it was a bit more than I could bite off at the time. The pieces feel sooo close to working, but it becomes a Rube Goldberg machine of systems software, with each piece operating at the limit of its design properties.

And honestly, I kind of want two orthogonal encryption technologies; TLS has bugs every other day it seems, and IPsec has always seemed a little bit suspicious, and really who knows if my random numbers are really random, so I wanted two layers.

Please do share if you make any progress!


I hesitate to post as my experience is a bit dated as well, but this topic continually comes up lately in my circles without much authoritative information.

Single-stream IPsec is still a major performance issue. Maybe someone out there is getting more than about 2.5Gbps per stream in the real world, but I haven't talked to them. That's with some pretty idealistic assumptions too; I would expect most folks' real-world performance is orders of magnitude lower.

Scaling it over multiple cores does work, but introduces the complexity that you mention. This still completely ignores the major problem that single flows are stuck with early-2000s-level network performance! With Ethernet moving to 40g in high-performance applications, being stuck with 2gbps TCP connections inside the datacenter is not ideal and makes global-infrastructure-level IPsec a non-starter. I briefly spent some time considering going 100% IPsec for all inter-cluster communications for a product we were building, but the performance implications put that dead in the water almost immediately. Moving it up the stack into the application was far less expensive, but of course carries with it the cost that you are no longer operating in a "fail safe" environment.

You could make the argument that anything high performance needs to horizontally scale across multiple streams and links - but that's being too idealistic for the current state of technology. Sometimes you just really need to make that legacy NFS transfer go faster.

With the advent of hardware acceleration built into "commodity" CPUs these days - I really did expect more real-world progress on this front both on the linux side and custom asics (e.g. switches/routers). Given the number of colleagues/customers who have asked for similar solutions ("I want 10gbps single stream ipsec") it's not an uncommon problem folks are running across.


For others who are interested, here's a thread that culminates in the 2.5Gbps number:

http://permalink.gmane.org/gmane.linux.network/280175

And here's some StrongSwan documentation about getting there:

https://wiki.strongswan.org/projects/strongswan/wiki/Pcrypt


Is IPsec a hard requirement?

I would expect it to be easier to encrypt and sign a megabit at a time instead of individual packets.

I'm having trouble picturing a situation where it has to be IPsec and it also has to be 10Gbps on a single box.


The original question referenced strongswan, which is IPsec, and where my investigations have focused in the past. But as I understand it strongswan is just for key exchange, and the decryption is still performed in the kernel, and often single-threaded.

It's easy to pass 10Gbps of TLS packets over a link with both sides of the connection doing their own encryption. But if you want to secure against possible unencrypted packets passing over a link, you'd want layer 3 or layer 2 encryption as well. Layer 2 means abstruse hardware, layer 3 means encapsulation of some sort, be it IPsec or whatever. That encapsulation is what I believe that kernels can not currently handle, though they should theoretically be able to. FreeBSD has made some noises about improvements to parallelization to support multi-gigabit IPsec, but I haven't been able to test them.


SMTP with TLS doesn't leak any metadata except the connecting IP address. And with DMARC you can mandate it.

Calls for whole new protocols for email are usually made by people who don't understand the email ecosystem as it currently stands.


But SMTP with TLS is opportunistic -- and if we're talking "national state secrets" it's trivial for a state to MITM the connection and do a TLS downgrade attack.


Not if the receiving smtp server implements DMARC


DMARC doesn't have a STARTTLS policy. I think you mean DANE / TLSA.


In which case you're conceding security to governments anyways.


Sorry Tony, you're absolutely right. Got my acronyms wrong. Thank you :)

(The parent is Tony Finch, a very smart email guy who runs email for the university of Cambridge)


Is DMARC like HSTS for SMTP?


Can the TLS be enforced by the sender all the way through the chain to the destination mailbox?


The sender (the sender's service provider or organization) decides when to use TLS while the message bounces around internally, then (this is the typical long hop) the recipient's server and the sender's SMTP client negotiate, and then the recipient gets to decide.

The sender may have opinions, but as soon as the message is handed over to the recipient's server, that's it. That's the end of the sender's influence, and that's how it has to be on a global internet. People can connect without having their infrastructure approved by anyone else.


No, it can't. The receiver could be using a non secure SMTP server.


Yes, as well as PGP or S/MIME. It just depends on your relay configuration. We've implemented it switchable. We have customers that bounce non-PGP-encrypted e-mails.


If you actually care about the data, it does nothing. It only guarantees that the front door is encrypted.

With the number of cloud anti spam and archive companies out there, you have no idea what happens between those providers and the mail servers.


I can't agree with this assessment. Personally, I think the NSA / pervasive-surveillance use case is overblown and a low-risk threat to me.

Anyone reasonably competent conducting the most cursory of investigations could figure out who I do business with in hours. The NSA boogeyman with omniscient network surveillance capability will come up with ways to find the source/destination of obfuscated messages anyway.

Sensitive data that I routinely handle is content. Employees improperly reading mail (on either sender or recipient side), or some data breach affecting mail servers, desktops, or relays are much higher risks.

PGP is a great way to encrypt ad hoc data at rest. I think the reason it isn't pervasive is that once you figure out what is really secret, you develop specific tools to transmit that data more seamlessly.


What about the chilling effect? People change their behaviors when they know they're being watched. It goes completely against the spirit of the US Constitution. How can I pursue happiness if I'm worried that what I write will be a liability in the future?

EDIT: grammar


I disagree with the surveillance society, but I can't control that policy.

So I choose to focus on what I can control.


I absolutely agree with "focus on what you can do." I think that we can also engage in the conversation as a community and network, which of course you are doing.

The main thing that stuck out to me about your post was "low risk to me." I have studied risk extensively, and I would like to point out that generally, risks are classified with ratings of both frequency and severity. I would categorize the NSA spying as moderate-high risk (to you) due to their extremely high frequency. Yes, the severity to you will be relatively moderate (chilling effect, etc), but the cumulative, unrelenting effect it has is incredible.

As I'm thinking about this more, it's not even a true "risk," because risk always entails uncertainty. You are certainly being affected by it. The true risk of the spying program is that somehow you are hung by your own words, which has an extremely high severity (prison aka slavery). I hope that this rings true to you. Rereading what I've written, it seems like I could be projecting a lot of my own feelings and experiences onto you, but this stuff is just so universal in its effects.

Thanks for discussing this! Have a great day.


My idea has been to have a system that works on top of email but sends out "fraud" email to other participants in the network. The encrypted payload gets decrypted at each endpoint and has a marker as to whether it was what I call a "fuzzer" email, in which case it's discarded; otherwise it gets graduated to the user's inbox.

Under this system, Eve will see Alice exchanging encrypted messages with Bob and Carol but won't know if they're automated dummy messages or authentic communication.

It lowers the information fidelity through an intentional introduction of noise.

The "fuzzing model" has wide use cases: for instance, a web browser can programmatically "browse" a variety of websites so that Eve can't tell what Alice is truly looking at. Every user in a system will have "red flag" websites in the "auto browse" list.


In these discussions about the importance of metadata, I'm surprised that Bitmessage is rarely mentioned. The Bitmessage protocol is decentralized, trustless, and hides metadata such as the sender and receiver of messages.


I was really excited when I first read about bitmessage. As far as I can tell, it solves more problems than any other messaging protocol. It's a mystery to me why it doesn't get more attention.


Dark Mail turned out to be a joke. Apart from the silly fields like "political party", the fundamental security model is broken. It has essentially zero benefit over "just use Gmail". The suggested security level delegates security to the receiver's mail server. Basically, if we just had a "require TLS with verified cert for SMTP" option, that'd do most of what Dark Mail does.

In fact, big companies already do this. When LM talks to the DOD over SMTP, you can bet they have mandatory TLS. Exchange offers this out of the box.

DM offers higher capabilities with no usability benefits over PGP or whatever.

Disclaimer: Since the beginning, I've said the DMA had zero chance of success. The problem isn't solvable.


We've had mixminion[1] around for years, which handles backwards compatibility with aplomb.

[1] https://github.com/mixminion/mixminion/


Anyone on HN know what Signal does for envelope encryption? AFAICT it's the only crypto app that's OSS and still cares about usability.


I'm reasonably certain all communication with the routing servers is done via https. You could double check their source on GitHub


Backwards compatibility has nothing to do with it. No one can read GPG encrypted mail.

It isn't built into their clients, and they don't get their own keypairs by default; it is simply straight up impossible to send encrypted email to even friends and family and have them be able to read it.

Everything else is window dressing to this problem.


How about S/MIME? At least, the programs that I use on a regular basis have it built in. (iOS and OS X Mail, Thunderbird, Outlook). It still does not help at all with the management and distribution of key pairs.


"In practice, I heard someone say that the biggest improvement in people’s privacy has been use of Gmail"

Technologically that might be true, but since Google is a third party, they're required to provide your data to the government, so in practice technological considerations like SSL are absolutely meaningless.


I think of it like this: usability can be a security feature.

If you build a "perfectly secure" piece of software, but it takes a very high level of skill to use it, your users will use something else that is easier to use, but less secure. And then how has your ideologically perfect piece of software helped improve their security?

If you make tradeoffs for usability, you will raise the bar because people will actually use what you make.


There is actually a whole paper about this by Roger Dingledine: http://www.freehaven.net/anonbib/cache/usability:weis2006.pd...

Basically, you're more anonymous the more people use Tor, so increasing Tor's usability increases its security.


For instance, Apple's fingerprint reader is not secure because with enough work you can make a mold of somebody's print and fool it. Except now virtually everybody with an iPhone has it set up so that in practice nobody else can access their phone.

Another thing is key generation. Encryption people demand perfect randomness for key generation, and that means the key is this crucial piece of data that must be perfectly protected and copied about. That's unusable for most people. Instead, pick the key from a billion candidates derived from the user's password. If they use a different computer, or reinstall their OS, or whatever, the software just takes a while to try each of the billion candidates until it finds the one that works.
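A sketch of that idea with stdlib PBKDF2 (the index-as-salt scheme and the iteration count are illustrative choices to show the mechanism, not a vetted design):

```python
import hashlib

def derive_candidate_key(password: str, index: int) -> bytes:
    """Derive the index-th candidate key from a password.

    Instead of storing a random key, derive it deterministically; on a
    new machine the software retries indices until decryption succeeds.
    The index-as-salt scheme and 100k iterations are illustrative only.
    """
    salt = index.to_bytes(8, "big")
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Re-deriving with the same password and index yields the same 32-byte key,
# so no key file ever needs to be copied between machines.
k1 = derive_candidate_key("correct horse battery staple", 42)
k2 = derive_candidate_key("correct horse battery staple", 42)
assert k1 == k2 and len(k1) == 32
```

The cost of each derivation is also what makes brute-forcing the candidate space expensive for an attacker without the password.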

Or encrypted email. For perfect security you need a safe way to exchange keys beforehand and all kinds of trouble. No. Just have software attach your public key in unencrypted email to recipients. If you receive an email with somebody else's public key then the software starts encrypting to them using it. It's insecure in so many ways, but it would mean the majority of email being encrypted and if done right with almost no impact to the user (password-derived keys, password change automatically sending yourself a new-key-encrypted email with the old key, etc).
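What's described here is essentially trust-on-first-use (TOFU). A toy sketch of the client-side bookkeeping, with a plain dict standing in for a real keystore and all names hypothetical:

```python
# Trust-on-first-use bookkeeping for the scheme described above: remember
# the first public key seen from each address, then encrypt to that key
# from then on. A dict stands in for a real keystore; all names are
# illustrative, and TOFU is knowingly vulnerable to a first-contact MITM.
keystore = {}

def on_incoming_mail(sender, attached_pubkey):
    """Pin the first public key seen for a sender; ignore later ones."""
    if attached_pubkey and sender not in keystore:
        keystore[sender] = attached_pubkey

def should_encrypt_to(recipient):
    """Encrypt outgoing mail only once we've seen the recipient's key."""
    return recipient in keystore

on_incoming_mail("alice@example.com", "PUBKEY-A")
on_incoming_mail("alice@example.com", "PUBKEY-B")  # ignored: already pinned
assert keystore["alice@example.com"] == "PUBKEY-A"
```

"First key seen wins" is exactly the insecurity being accepted in exchange for making most mail encrypted by default.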

Security people need to stop seeing things in black and white. Something can be insecure and "broken", yet still raise the overall level security.


That's the exact line of thinking our lab has been on. Security<-->usability is a tradeoff, and PGP seems to sit too far on the secure end of the spectrum to be useful to most users. We're working on a way to deliver progressively enhanced security, while onboarding less technical users with more usable features.


While mostly true, it's not so clear-cut. Some improvements to usability would reduce security, but there are also things that can be done to make software easier to use while keeping it just as secure (for example, in-app help).


Fork Thunderbird or some such client, also make a web client available. Make a new service which offers only encrypted e-mail by default (with a new e-mail address that includes e-mail hosting for your own domain) and provides the key server and everything else. Advertise it as something different from e-mail like encrypted e-mail. Set a new precedent, create a new industry.


Unless they can check their email on their phones, it won't take off. Mobile email clients are much more important than desktop nowadays IMO.


Most "users" don't care about end-to-end encryption and instant messaging has already become the "better e-mail" on phones. The only way I see of breaking into this space is to make something for companies, because they still have reasons for a "better e-mail" and value encryption.


But who would use it? This study is all about getting the average computer user to use PGP. Most people with webmail accounts won't want to switch back to a desktop client and possibly have to change their email address in order to send & receive secure mail.

Besides which, 'make a new client' doesn't answer the main issue, which is how to write a usable client. There are plenty of existing unpleasant PGP clients out there; unless you can detail what makes your new attempt better, it will most likely fail as well.


It has to be an all-in-one solution that you download and that pretty much does everything for you, so your mother can download a file and just go. With a simple wizard that lets you register a new e-mail address and potentially allows you to invite other users via their e-mail addresses... imported from Gmail or something like that.

I would not worry too much about compatibility with existing mail solutions or what not. It's a new service which is completely different from e-mail as far as the user is concerned. More advanced users can use it as e-mail if they want. Who would use it? Anyone who does not want all their mail to be a public spectacle.

Plenty of paranoid Americans out there right now that might be willing to give it a try.


Tutanota (https://tutanota.com/) does that, and even has compatibility with the existing SMTP network. It seems to be web-only at the moment though. I'm pretty sure other systems exist, unfortunately as long as we stay with SMTP nothing will change.


> also make a web client available. Make a new service which offers only encrypted e-mail by default

rainloop[1] already does this.

1. http://www.rainloop.net/


so like hushmail?


No, you need to provide a desktop client, open source and with everything configured by default, in addition to the service. I guess you should also let users use other IMAP servers, but the client should always encrypt all mails it sends out.

Hushmail was always open to court order attack, a desktop client, an open source one, is much less so.

It should always put the name of the client in the subject line, or send some unencrypted text along with each message telling the recipient where they can get a client to view it. And the client should be able to configure itself with minimal input, sort of like Thunderbird does right now with most e-mail services. And it should generate and store your key pair, and transmit your public key to anyone upon request.


oh ok.

this is actually harder to solve than just 'here have thunderbird'. how do you replicate keys across computers? how do you send e-mail from a new computer when you don't have access to your old computer (it turns out the answer is you don't). how do you back up your keys? where do you store them?

there are some high security answers like "ship our users HSMs" but those are very brittle to common failure modes like "i lost it". I think you'll discover that if you try to create something that is both resilient to average negligent use but still encrypted, you'll wind up with something that is basically hushmail.


I see two major barriers to mass adoption of any crypto system that requires a UI.

1. Abstraction. For the non-expert, the only metaphor that works for PKI is that of physical security. The concept of a "key" as a series of characters or a file that must be protected must be replaced by an abstraction that allows users to protect it in the same way they understand how to protect a key or a wallet.

So long as the "key" continues to appear to be an enormous password, people will continue to treat it that way, along with all the crazy worst-practices in the world. Not to mention the huge host of problems that accompanies key security for the layman.

I don't know what the answer is, but I do think that a physical token of some sort is the right start.

2. At the end of the day, the vast majority of people are not interested in the trade off between better security and convenience.

There is a learning curve, however shallow it can be made, and those that genuinely want or need better security are motivated enough to do the learning. For everyone else, it's like optional homework.

I would add to this that most everyone I know who cares about security is quite comfortable with a large array of APIs and software to do all sorts of cryptographic gymnastics. The bar is quite low already -- it has been for some time, IMO.


See "Why King George III Can Encrypt" (2014) for an alternative metaphor to 'private key' and 'public key':

http://randomwalker.info/teaching/spring-2014-privacy-techno...

>We present the user with four items: a key, lock, seal and imprint. The key and lock serve the purposes of encryption: Alice distributes her locks as widely as possible so that others can send her messages that only she can open with her key. Similarly, the seal and imprint handle signing: Alice passes out copies of her imprint so others can verify her as the sender of messages she has stamped with her seal. Collected together, we refer to these four items as a toolkit; this abstraction handles the contingency where a user loses her key but not her seal: we insist that the toolkit represents an indivisible unit that must be replaced whenever any element is lost.


That is actually a really nice metaphor they came up with. The seal and imprint are pretty useful for explaining signed email.

I already explain encrypted email to colleagues with the key and lock metaphor: I give you a box full of open padlocks to which only I hold the key, and you do the same for me. Anyone can have the padlocks as long as you keep the key secure. Seems to work.


> 1. Abstraction. For the non-expert, the only metaphor that works for PKI is that of physical security. The concept of a "key" as a series of characters or a file that must be protected must be replaced by an abstraction that allows users to protect it in the same way they understand how to protect a key or a wallet.

Do you mean something like the Yubikey USB dongle: https://www.yubico.com/products/yubikey-hardware/

[not associated, just referencing a 'physical' item that can be treated as a 'key' by non-experts]


Yubikey NEO has support for using an OpenPGP applet over NFC with OpenKeychain on Android.


I've been using GPG for a while now, to sign my outgoing mail. I don't encrypt it as I don't know anyone who uses GPG. I'm still happy to use it, to get used to it, and to see alternative uses. Signing is in my view a big improvement, to make sure nobody has messed with the messages. I always use HTML, so the receiver gets an attachment with the signing hash in it, and no strange text in the mail. I use a signature with a link to my public key and a link to the PGP page on Wikipedia. Everybody can read my mail, and if they want they can validate it.


So if the way the receiver gets hold of your public key is by a link in the signed email, what's to stop an attacker changing the link to a fake key and re-signing the email?

Hmm. Why don't we have a standard place on our domains for public keys? For example for myname@mydomain.com the public key could be https://key.mydomain.com/myname.
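The mapping proposed here is purely mechanical; a sketch (note that the `key.` subdomain convention is the commenter's proposal, not a deployed standard — the real-world analogue of this idea is OpenPGP Web Key Directory):

```python
def key_url(email):
    """Map myname@mydomain.com to https://key.mydomain.com/myname,
    per the scheme proposed above (an illustrative convention, not
    a standard)."""
    local, _, domain = email.partition("@")
    return f"https://key.{domain}/{local}"

print(key_url("myname@mydomain.com"))  # https://key.mydomain.com/myname
```

A mail client could then fetch the recipient's key over HTTPS before encrypting, which reduces, but does not eliminate, the trust problem discussed below.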


Some part of a key exchange needs to be out-of-band.

Alice communicating to Bob what the hash of her public key is over the phone, in person, or otherwise is good enough. Doing that allows Bob to verify whether or not a specific public key belongs to Alice.
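That out-of-band check amounts to hashing the key material and comparing against what was spoken or written down. A sketch with SHA-256 (illustrative only: OpenPGP defines its own fingerprint format over a specific packet encoding, and the key bytes below are a placeholder):

```python
import hashlib
import hmac

def fingerprint(pubkey_bytes):
    """Short hex fingerprint of a public key (illustrative; OpenPGP
    computes real fingerprints over a defined packet encoding)."""
    return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

def verify_out_of_band(pubkey_bytes, spoken_fingerprint):
    # Constant-time comparison, so the check itself leaks nothing.
    return hmac.compare_digest(fingerprint(pubkey_bytes), spoken_fingerprint)

key = b"-----BEGIN PGP PUBLIC KEY BLOCK-----..."  # placeholder key bytes
fp = fingerprint(key)
assert verify_out_of_band(key, fp)
assert not verify_out_of_band(key, "0" * 16)
```

The security rests entirely on the fingerprint having traveled over a channel the attacker doesn't control, which is the point being made above.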

The problem with hosting a key on a website is that it only really works if the domain has a certificate that actually binds it to its owner, e.g. an extended-validation certificate where someone proved their identity to the certificate authority. Then things just become a question of how much you trust the CA.
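The out-of-band check only needs a short, human-readable digest. A minimal sketch, assuming SHA-256 over the raw key bytes (real OpenPGP fingerprints are computed over a specific key-packet encoding, so this is illustrative only):

```python
import hashlib

def fingerprint(public_key_bytes):
    """A short digest Alice can read to Bob over the phone."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # First 40 hex chars, grouped into 4-char chunks for reading aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

# Bob compares what Alice reads out with what he computes from the
# key he downloaded; any mismatch means he got a swapped key.
print(fingerprint(b"-----BEGIN PGP PUBLIC KEY BLOCK-----..."))
```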


Getting a strange attachment is better than getting a few strange lines of text at the end of the email?


Definitely!

With PGP/MIME, email clients that don't understand OpenPGP simply show a small attachment. Without PGP/MIME (the old way) you get funky delimiters in your mail body that look scary to non-technical users. I started with the old way (because it appears to be more compatible with legacy email clients) until a colleague worriedly asked me about the weirdness in my mails, so I switched to PGP/MIME. As far as I can tell, only really old Outlook versions can't handle PGP/MIME, and you can use plain-text mails as well as HTML.


The idea of including a link to your public key and a link to the Wikipedia article on PGP is great! I'm going to do this as well. Would you mind showing me how you link to the above-mentioned?


Here's the code that I used in Thunderbird. I make it small and light in color.

<div style="font-size: 10px; color: #666;">This message is signed using <a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy">PGP</a>. Public key: <a href="https://pgp.mit.edu/pks/lookup?op=get&search=0xBE5D1E31ABCD1... ABCD1234</a> </div>

I see the link in the code is not closed properly, so you need to fix this.



I agree that some terminology could make it easier. I think "key" is an ok choice for the private key, but for the public key I think there could be a better analogy. Calling them the same thing is a bit confusing and people don't understand that one of them you give away, while the other is confidential.

When I explain it to people I tell them that their public key is like an unlocked lockbox that you give people into which they can put a private message. They can close the box to lock it, but once they shut the box only you have the key to open it.

It's not a great analogy but it seems to me that the public key does not make sense as a "key" to the average person.
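For what it's worth, the lockbox intuition maps directly onto the math. A toy illustration with textbook RSA and tiny numbers (no padding, utterly insecure, purely to show that the "locking" and "unlocking" values are different things):

```python
# Public key (n, e) = the open padlock anyone may have.
# Private exponent d = the key only you hold.
p, q = 61, 53
n = p * q          # 3233, shared publicly
e = 17             # public: locks the box
d = 2753           # private: unlocks it (e*d = 1 mod (p-1)*(q-1))

def lock(m):       # anyone can do this with the public key
    return pow(m, e, n)

def unlock(c):     # only the private-key holder can do this
    return pow(c, d, n)

secret = 42
assert unlock(lock(secret)) == secret
assert lock(secret) != secret  # the closed box looks nothing like the contents
```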


I think if it can be made to work, a better framing is in terms of establishing a secure channel and then communicating/sharing something over that channel.

Don't make the user think about the underlying mechanism at all.

The hardest part is probably getting the public key distributed, but that can just be "Check if Bill can receive over a secure channel."


I like this line of thinking. It seems that the word "key" is not the problem so much as the terms describing the respective key's status as opposed to their function. Perhaps something along the lines of "locking" and "unlocking" keys, or "encryption" key and "decryption" key (and, of course, "signing" key, no one has to know this is the same thing as your "decryption" key).


Truly secure encryption is, and always will be, user-unfriendly, because it means only you can hold the private key. That means no "Forgot my password" functionality, no fancy cloud AI analyzing your data and suggesting cool stuff, no free hosted full-text search of your data, no open directory of friends to search, etc. So basically no Gmail, iCloud, Facebook, Dropbox, etc. It would require a completely new paradigm. Sure, there are interesting technologies such as Ethereum and Bitcoin, but they are not user friendly and don't scale to the regular user (downloading the whole blockchain is a no-no for most people, for example). Most importantly, current solutions are too convenient and work really well (bar the privacy and security issues most people don't care about). So you'd need a technology that is not only way better, more efficient, and smarter than the current ones (Gmail, Google, Facebook, iCloud, etc.) but also secure and decentralized. Not happening any time soon, but hopefully not impossible either.


Because I have no interest in memorizing keys, I store them on disk protected by a passphrase. I don't know if this counts as "Real secure encryption" but it's sure a lot more secure than more typical use of email.
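The passphrase protection itself is conceptually simple: derive a symmetric key from the passphrase, then encrypt the private key with it. A dependency-free sketch using PBKDF2, with XOR standing in for the cipher (GnuPG actually uses its S2K scheme plus a real cipher such as AES; the XOR is only to keep the illustration self-contained):

```python
import hashlib, os

def derive(passphrase, salt, length):
    # Slow, salted derivation makes passphrase guessing expensive.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               200_000, dklen=length)

def protect(secret_key, passphrase):
    salt = os.urandom(16)
    pad = derive(passphrase, salt, len(secret_key))
    return salt + bytes(a ^ b for a, b in zip(secret_key, pad))

def recover(blob, passphrase):
    salt, body = blob[:16], blob[16:]
    pad = derive(passphrase, salt, len(body))
    return bytes(a ^ b for a, b in zip(body, pad))

blob = protect(b"fake private key material", "correct horse battery staple")
assert recover(blob, "correct horse battery staple") == b"fake private key material"
```

The on-disk file is useless without the passphrase, which is exactly the property that makes it more secure than typical email while not requiring anyone to memorize a key.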


You can store it on disk protected by a passphrase, but that's still not user friendly: far too few people would be willing to or capable of doing so, which was my main point. Secure encryption is by definition impractical, inconvenient, and not user friendly, which makes mainstream adoption very unlikely, at least in the foreseeable future.


I think it's easy to pick on a weak example, but much progress has been made since the original "Why Johnny Can't Encrypt".

A recent example: Textsecure / Signal has been very, very smooth for me and I doubt it'd be much more difficult for laypeople either: https://whispersystems.org/


Although, as mentioned in the comments above, Signal does not hide metadata, which can be as valuable to attackers as the contents of the messages themselves.

Also, in terms of adoption, it's still hard when everyone has phones that use iMessage or SMS by default. iMessage is end-to-end encrypted but not compatible with Signal, and Android has no equivalent baked in. Using Signal requires two extra steps from the user: 1) downloading/installing Signal, and 2) knowing about it and caring enough to use it in the first place.


Agreed, Whisper is doing this right.


PGP does have a legitimate use case, namely Edward Snowden: he's technical, the NSA is his main threat vector, and it makes sense for him to spend lots of time reasoning about the web of trust, signing keys, double-checking fingerprints, and creating 12-minute explainer-videos for people he needs to communicate with (for reference: https://vimeo.com/56881481).

Here's the problem: not everyone needs the stringent encryption guarantees that Snowden needed. There is value in crypto systems that are less demanding than PGP, especially if they come with vast increases in usability.

iMessage, I think, is what most people need in their everyday life. It's end-to-end encrypted, and although it has a central key-server, it is backed by Apple, which has publicly announced their commitment to user privacy. Yes, a central key-server means that you can't verify the fingerprints Apple gives you, but again, that's not what iMessage's encryption was designed for. If you need that level of certainty, use Signal.


What nonsense.

We live in the information era, information nowadays is the ultimate power. Democracy depends on the balance of power between the people and the government. If the government has all the data, it has all the power.


Apple is committed to destroying Google, not to protecting user privacy.


Hey HN, I'm one of the authors on this paper. I'd be happy to answer any questions.


> While our results are disheartening, we also discuss several ways that participant experiences and responses indicate how PGP could be improved.

Have you approached the developers of both Mailvelope and Gmail to discuss these improvements? How did they respond? Also, have you participated in usability discussion with the OpenPGP developer community at large? Any insights?

I ask because it seems like everyone who isn't very active in the OpenPGP developer community thinks that usability is a high priority. But, in my experience, when you start to bring up the topic of user-friendliness in the mailing lists, you get resistance or apathy. It seems like this is a problem of culture and incentives. How can those issues be addressed?


We haven't reached out to Mailvelope/Gmail developers, nor have we opened a usability dialogue with the OpenPGP community. We're still in the process of getting these results published. Your experience with that community is interesting though.


Please talk to the OpenKeychain developers. That's the best PGP implementation I know of. Also keybase.io.


Given the usability and technical problems with PGP, do you think it's worth saving?


For those who value security enough to take the pains to learn the system, yes, it's useful. For your average netizen, we feel a progressive approach would be more effective. Start out with a key escrow service and if the user wants more security, they can ratchet up to PGP.


It's one of only 2 or 3 technologies mentioned in the Snowden leaks as capable of blocking the NSA (GPG specifically). That's reason enough to build on and around it until we have a usable solution that's just as strong.


Should be considered with moxie's piece: http://www.thoughtcrime.org/blog/gpg-and-me/


This paper should be titled: Why Johnny Still, Still Can't Use Mailvelope: Evaluating the Usability of Mailvelope.


Yes, we probably would have gotten better usability scores if we developed our own PGP client with a better UX and tutorials. We selected Mailvelope because it was rated highly on the EFF's secure messaging scorecard [1] and we were exploring how the state-of-the-art in PGP software actually performed with end users.

[1] https://www.eff.org/secure-messaging-scorecard


There is a lot wrong with that scorecard, and it's dispiriting to hear that it is actually influencing research.


Interesting; I'd really like to hear what issues you have with that scorecard.


Matthew Green and I had a bet for the last year, which just ended, over libotr's security; I bet him that nobody would find a sev:hi flaw in it all year, and, of course, won, because at this point all the low-hanging fruit in libotr has been shaken out.

That bet was a reaction to the release of the EFF scorecard, which at the time gave Cryptocat(†) a perfect score but dinged ChatSecure, which is a libotr client, for not having an audit done.

I told Matthew Green I'd write up something about the bet, and what did get reported to me about libotr; I'll probably spend a few thousand words critiquing the scorecard there. A brief outline, though:

* There are places where the scorecard is factually misleading. For instance: there seems to be no coherence to what "source code available for inspection" means; it still lists Telegram as being open source!

* It's oversimplified in misleading ways as well. Systems which technically have the capability of verifying peers are given big green checkmarks even when that feature is so broken as to be useless. And, of course, there's the "been audited recently" checkmark, which, as anyone familiar with software security auditing will tell you, means absolutely fuck-all (again: ponder the fact that libotr, which has been a high-profile target for something like a decade and is more or less frozen stable, was counted as "not audited", while projects that got a 1-week drive-by from a firm specializing in web security got a big green checkmark).

* What does "security design properly documented" even mean? Where's the methodology behind the chart? A few paragraphs of text aimed at laypeople isn't a documented methodology! The one place they eventually did add documentation --- "what's a good security audit" --- tries to explain a bunch of stuff that has almost nothing to do with the quality of software security inspection, and then throws up its hands and says "we didn't try to judge whether projects got good audits". Why? Why didn't they consult any named outside experts? They could have gotten the help if they needed it; instead, they developed this program in secret and launched it all at once.

* The project gives equal time to systems that nobody uses (at one point, Cryptocat was near the top of the list, and ChatSecure was actually hidden behind a link!), and is ranked alphabetically, so that TextSecure, perhaps the only trustworthy cryptosystem on this list (with the possible exception of the OTR clients) is buried at the bottom.

* If the point of this chart is to educate laypeople on which cryptosystem to use, how is anyone supposed to actually evaluate it? They don't really say. Is it ok to use Jitsi's ZRTP, despite missing the "recent audit" checkbox? What about Mailvelope, which is missing the forward-secrecy checkbox? Can anyone seriously believe it's a better idea to use Telegram or Cryptocat, both flawed ad-hoc designs, than TextSecure or ChatSecure?

I guess I can't be brief about this after all. Grrr. This scorecard drives me nuts.

I am not saying that these flaws in any way impacted your particular research project.

PS: an old thread on this: https://news.ycombinator.com/item?id=8557654

31580544b27c10736ebe1bdd05a56d96c486823d563d4493c317548976c3d8db


What would be a better way to produce such a scorecard?

Is there already any collection of common criteria established by computer scientists and accepted by experts, or any kind of standardization of requirements for secure software produced by leading security authorities, that would allow extracting data for a compact visual comparison like the EFF scorecard?

Would you like to provide a link or any material that compares "the security" of products and offers an understandable, industry-accepted categorization?

Isn't it a bit strange that a small organization of non-computer-scientists produced something that has been painfully missing for at least 50 years? Isn't it clear that a first approach to such a thing must fail, and that this can only be a prototype for a process that should be adopted and worked out by people who understand what they are doing?

Isn't it a bit strange that no such scoreboard has been produced by an international group of universities and industry experts, with transparent documentation of the review process and plenty of room for discussion of different paradigms?

The EFF scoreboard painfully demonstrates the omissions of multiple generations of security experts, who failed to establish a clear definition of what security exactly means, how to discuss it, and how to find an acceptable approach to anything that deserves to be called a "review" in the scientific sense of the word.

It is totally clear that Apple and Microsoft have very different ideas about security than OpenBSD developers, but it would still be of great value to have a space where people could follow that discussions and compare the results of different approaches and solutions to security related problems.

The EFF scoreboard carries the embryonic idea of a global crypto discussion, review, comparison, and knowledge site that could also serve as a great resource for non-crypto people and students to learn about the field. The highly valuable information you and other experts drop here and there in HN threads and on various mailing lists should be visible in one place that collects all of it and allows open public discussion, so people can learn to decide what security means for them.

If such a thing exists, please show me.

If not, please build it.


There is not. Software security is a new field, cryptographic software security is an even newer field, and mainstream cryptographic messaging software is newer still.

The problem with this flawed list is that it in effect makes endorsements. It's better to have no criteria at all than a set that makes dangerously broken endorsements.


Where can we read more about what's wrong with this scorecard ?


It's worth mentioning that the original paper [0] recently won the USENIX Security 2015 Test of Time Award [1], which highlights papers at least 10 years old that have made a lasting impact on their field.

[0]: http://www.gaudior.net/alma/johnny.pdf

[1]: https://www.usenix.org/conferences/test-of-time-awards


What ever happened to Google's "End-To-End" Chrome extension? https://googleonlinesecurity.blogspot.com/2014/06/making-end...


It's still being worked on. Turns out, getting these things right from a crypto and a UX perspective is hard.


i'm not entirely sure this is worthy of a paper... it's pretty much a common-sense conclusion, if you've ever tried to use these things, that the UX is usually terrible, even by the standards of a technical user.

too many manual steps are needed, even after setting up software that is purely designed to make this sort of thing easier.

mailvelope is a good example though. i've tried using it, but it's not very clear what to do with it, and even once you get to the point of having the button appear in your gmail or whatever, you have to go and do even more things before you can actually send something encrypted. optimising this workflow should be trivial... a lot of this stuff can be done for you, e.g. automatically enabling it for common webmail sites, automatically generating keys for you, etc.


I think we should start with an EXTREMELY simple app that does not attempt to do any fancy e-mail client integration, but is aimed at making it possible for non-nerds to send one-off secure mails and files in a way that is at least SLIGHTLY fun.

Keybase, GPGTools, and all of these things have different features and menus and all kinds of stuff which makes them EXTREMELY intimidating.

My mom is a teacher of computers, she is an expert at Excel and basic Windows things and Twitter and mail and she's even somewhat into Bitcoin, but crypto makes her instantly freeze into "oh my god what is happening I am terrified and confused" mode.

Today she needed to send some encrypted JPGs to verify her identity for some Bitcoin wallet thing. The support asked her to please encrypt her stuff with our public key.

When she opened that public key it came up in Notepad++ as a huge intimidating block of random characters. That was probably enough to put her in a bad mood. After that she wasn't very receptive and I guarantee that every single guide or tutorial or program would be WAY too nerdy, boring, and complicated for her to follow step one.

Anyway I just did it for her on my Mac and I explained my thoughts as I did it and she said "see I understand the concepts but it's just too much stuff to know about to actually do it myself."

I said "yeah, that's what Snowden keeps saying, these things are just too difficult for normal people. Even Glenn Greenwald, that journalist you know, apparently he sucks at computers and it took him months to even install the crypto app."

So I wish I could have said "just go to crypto.express" (random made up available memorable domain) and follow the easy steps and you can encrypt your zip file pretty easily with just drag and drop and it's totally made for newbies but still Snowden-level secure.

I don't really think people in general relate to the paranoid wish to encrypt all your mail. They probably think more in terms of specific secrets that they care about protecting. So you could get them started on use cases like sending passport photos, passwords, codes, credit card numbers, or dick pics or whatever it may be.

Once they have some tiny bit of successful experience with using private/public keys that one time, and it wasn't super impossible like they thought, they might be more open to learning more sophisticated ways of using crypto, such as automatically encrypting and signing mail.



I just tried it and it seems like a pretty good start!

It doesn't seem to have launched publicly yet, but I hope they'll have a landing page with utterly clear copy in a large font, written with the mindset that, for example, this sentence

"Once you have entered your email address, SC4 will automatically provision you with a set of random keys."

is terrifyingly incomprehensible gobbledygook.

...actually I might be confused right now, but even I (a total smartypants guy) can't seem to figure out how to paste in somebody else's public key. The default option is to encrypt for myself, and I see no way to input another key, only a text box for the message...

Edit: oh, maybe the text box is for the keys, and the thing I want to encrypt should be drag and dropped? Either way, the UI as it is right now is difficult to understand, but I suppose it is a prototype.


Thanks! "They" is me :-) And yes, you can paste anything into the text box and the app should automagically do the "right thing" with it.


Keys are stored in local browser storage. How does this work when browser cache is cleared?


Cache is separate from localStorage. Clearing your cache should have no effect on your SC4 keys. (Also, if you're running one of the two standalone versions, your key is stored in a file.)


At least in Firefox, clearing cookies (not an unreasonable thing for users to do once in a while) will also clear localStorage.


That's true. There are limits to what one can do when running in a browser. If you have a better idea for where to store keys, I'm all ears.


I mean, using Keybase (https://keybase.io/) is pretty easy


I created an account there and never used it again, and the same goes for most people I know. Even the rock stars featured there don't seem to be using it. Also, in what world is a CLI tool with a git-like interface "pretty easy" outside the tiny world of computer programmers?


Same here. I set up a Keybase account and never used it. And I'm somebody who actually uses PGP every day.


I have a keybase account that I used the other day to have a few secrets sent to me from various sources. I have to say, even as a complete geek, I wasn't that thrilled with the experience.

I have a new machine so I had to go through the setup process. First I had to install node/npm, and then their stuff, and then some other PGP apps. The whole thing felt really clumsy.

I'm a bit of an encryption novice, but from what I can tell, it feels like the openssl tools on my Mac should be able to do everything without having to jump through all the hoops.

Then there's the user experience on the other side. I had to send people to the keybase page to get them to generate the message and then email / send that to me. That's annoying and probably verges on terrifying for non-geeks.

Afterwards I thought to myself: why can't I set up an HTML/JS page with my public key in it? It would have a single box called "Send something to me", and it could do the encryption client-side and then fire me an email, or stick it in Firebase, or whatever.

I know this doesn't solve all of the issues (trust etc) but when I'm just trying to give customers a way of getting me a password or something without them sending it via email it goes a long way and removes a lot of the friction.


this is specifically the problem they are trying to solve, too


One click encryption is one click too many


I personally have started to use enigmail in Thunderbird a lot more since it has its newer, much more intuitive interface.


If someone could use your public SSH key from, say, GitHub and send you a private message, that would be great. From my understanding, that is not possible: you need to generate and exchange PGP keys beforehand, a major UX issue. Allow SSH public keys to be used and we'll be a lot closer to fixing the secure-email problem.


Also, Facebook (which has an official onion site to help if your internet connection is suspect) has GPG as a standard part of all user profiles. This would be a much more normal-person directory than GitHub or MIT (yes, someone on HN suggested MIT's keyserver as a mainstream PGP directory).


Friends (https://moose-team.github.io/friends/) allows you to use your SSH keys but it's not email anymore.


That's pretty much what https://keybase.io does, in a roundabout way. It links your public identity across many social networks to a PGP key, so you could say "let me talk to kstrauser on GitHub" (or on Hacker News!) and it would map that back to my public key.


But they can, it's just not PGP.


It's unrealistic to expect naive subjects to learn how to use GnuPG in an hour. It takes longer than that to learn a new game! People need at least to know what public-key cryptography is before trying to use it. But unfortunately, the Wikipedia entry is rather intimidating.[0] The GnuPG FAQ (at 7.1) is clearer, but not so easy to find.[1]

[0] https://en.wikipedia.org/wiki/Pretty_Good_Privacy

[1] https://www.gnupg.org/faq/gnupg-faq.html


You said "People need at least to know what public-key cryptography is before trying to use it." I think this is untrue. People really just need to know how to work with the technology, not understand what it's actually doing. Trying to explain the underlying principles makes it more confusing and thus, less safe.


I mean basic stuff. Users need to know that they have both public keys and private keys; that public keys allow other users to encrypt stuff to them, so they need to share them; and that private keys allow them to decrypt stuff encrypted to their public keys, and also to sign stuff, so they need to protect them and keep them private. Failure to grasp those basic facts accounted for much of the craziness and confusion in the cited study.
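Those two private-key roles (decrypting and signing) can be shown with the same toy textbook-RSA numbers used in classroom examples; tiny and unpadded, so for illustration only:

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy public modulus/exponent and private exponent

def sign(message):
    # Hash, reduce into the key's range, then apply the PRIVATE exponent.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature):
    # Anyone holding only the PUBLIC key (n, e) can check the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"meet at noon")
assert verify(b"meet at noon", sig)
# A tampered message or forged signature would (overwhelmingly) fail to verify.
```

Nothing in `verify` requires the secret `d`, which is exactly why a public key is safe to share.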


This is something that the UX people of gnome could actually be useful for..


I'm yet to be convinced they'd be useful for anything.

Other than, perhaps, signing on to a competing project to bring it down as well.


But they've made gnome worse, not better.


Worse for some people and better for others. My mother (58) and my grandmother (82) love how easy it is to do what they usually do, contrasted with the Windows 8 experience.

I don't know if that is a common case or not, but at least it's not a definitive "worse" or "better". And for me it's great, as I never get calls anymore asking me where something is.


Well, my little sister (12) and my dad (56) love KDE Plasma, as it’s "simpler than windows".


Hi guys,

Maybe you should try Jumble (www.jumble.io) for email encryption. It works with Gmail, iOS, and MS Outlook.

Jumble manages key pairs for users and encrypts the private key with the user's password, so Jumble doesn't have access. This ultimately simplifies the whole process for the end user and frees them from key management, which is the major pain point with PGP-based solutions. Also, it integrates on top of existing email clients, so there's no need to change your email address or how you interact with email.

Full disclaimer: I'm a co-founder of Jumble


At this point, for PGP or really any user-based encryption to get widespread use, Gmail, Yahoo, and Microsoft need to adopt it and integrate it into their webmail clients, possibly as a semi-mandatory feature that's opt-out instead of opt-in.

Anything less will fail as no one is interested in getting a new email address.

Anecdotally, I've tried three or four times to learn how to use PGP and integrate it into my workflow, and failed each time, so it definitely needs work (at least if they want me on board :))


Our approach here is to make security usable to developers via a programmatic interface/API, so users' data is better protected without them having to do anything at all.

https://www.trycryptomove.com

Eventually if/when we think about making consumer-facing wrappers to CryptoMove, the key is going to be usability. Security will need to be as easy/fun as consumer applications. Easier said than done though.


I think GPGTools for OSX does a great job of a client.


I've been pretty happy with it, but I'm also a programmer with interest in and knowledge of why it's important and how it's useful. I can see non-technically-inclined family and friends being very confused by it.


Too bad there's no GPGTools for iOS. And the chances of Apple opening up the Mail client for it are slim. Also, the licenses could be a problem?


Seconded. Unfortunately the webmail services don't have the same simple interface.


A lot of baseline configs can be implemented to set up opportunistic crypto, for what Ben Brooks calls 'starbucks hacker bob': https://brooksreview.net/2013/06/encrypting-stuff-against-st...


I have a budding idea that involved using a blockchain to manage cryptographic credentials, but I'm having trouble finding proper blockchain libraries. Any recommendations?

I'd prefer Python or Go, since that's what I'm mostly using these days, but at this point I'll take brainfuck if that's my only choice...


For occasional PGP use, a web app is as easy as it gets. https://ssl.ba.net/util/pgp/

No need to store or remember keys, just create a new key for your conversation.

All encryption/decryption is done client-side with open source code.


PGP's UX is so dismal I could almost believe the whole project operates under an NSA false flag.


The problem is a lack of funds. It was conceived for nerds, by nerds, and never got enough funding for UX.

A common state in the FOSS universe.


That's not a funding problem. That's boredom, or perhaps ideology. Most FOSS developers don't bother thinking about UX or even implementing a UI because it's boring. They're interested in the high-tech stuff their software can do and hope somebody else will put a front end on it. But usually nobody does (at least not an open-source front end), because all the other FOSS developers are also more interested in the specific technical thing their own software does.

I make money developing a front end for a popular but absolutely unusable piece of FOSS software. Other companies do too. It sometimes feels a bit wrong that the open-source guys have done the biggest and hardest job for free, while I profit by just slapping some windows and dialog boxes on top of it. But they really don't want to do that part, and they have openly said they think their UI is fine and that people would benefit from learning to use a command-driven interface.


Isn't there a Firefox extension that can just do some PGP stuff with a right click in Gmail?



Did you guys read the paper, or even the abstract? This whole paper is about Mailvelope and the difficulty people had using it.


Maybe that's the joke.


This doesn't solve the problem of email encryption, but I made a (web-based) file encryption app with the goal of usability: https://hypervault.github.io/


Johnny can't even select, download, use, or configure an email client.


...not to mention that in most cases, Johnny doesn't even really know what encryption is, let alone believe he's capable of it.

Try asking someone to encrypt something, anything before sending it. The looks you'll get range from sideways or merely confused to fearful and panicked, to are you fucking kidding me?


Given the fact that https://protonmail.com is still offline after several days of attacks, will secure mail ever get a chance?


Here is a list of steps to send a signed e-mail:

http://davidjarvis.ca/dave/tech/privacy.shtml

Once signing is in place, you can send an encrypted e-mail:

https://support.mozilla.org/en-US/kb/digitally-signing-and-e...

Here's the response from Robert Hansen on why Enigmail has such a complicated process:

> Here's the heart of the problem: "Modifying the wizard to account for average users" works only if you can define an 'average user'. And you can't, the same way that I can't. Please don't misunderstand me, it's not my intent to slight you or your willingness to do a lot of hard work. (Quite the opposite: I really hope you're willing to do it!) But we have to start by looking at the realities, and the reality is that the GnuPG-using community is incredibly fractious and prone to declaring holy wars over the most trivial and irrelevant of details.

> If you set key generation to default to 4096-bit keys, a lot of people will roll their eyes and accuse you of subscribing to key length fetishism. If you leave the default at GnuPG's default of 2048-bit keys, a lot of people will scream that you're not providing enough security.

> If you default to inline PGP, international users and people who love HTML mail will complain the default doesn't work for them. If you default to PGP/MIME, people who post to mailing lists will scream that your change just broke their communications. (Not a joke, incidentally: there are a lot of mailing lists that strip all attachments as a security measure... which has the side effect of completely breaking PGP/MIME.)

> The reason why we ask so many questions in the wizard is not because we're afraid of losing existing users, although that would definitely be a consequence if we were to go with your more simplified way. The reason why we ask so many questions is because we cannot know which option a user will demand to use and will think should be enabled by default.

> If we can't come up with a default that will work just fine out-of-the-box for 95% of our userbase, then we're not going to make it a default. We'll ask the user for their preference. And that's the reason why our wizard asks so many questions: we ask questions when there is no clearly correct default.

I think he's wrong. Flexible software would have a default wizard for basic users and an advanced wizard for users who care about the difference between 2048- and 4096-bit keys. It's one question at the start of the install process, "Would you like to tweak encryption settings?" -- Yes, No, and "What does tweak mean?" are possible answers. Or install with default settings and guide power users to tweak the parameters post-install. Encrypted e-mail should not require over 90 steps.


When I use my phone to pay for things in a shop, there's surely a lot of complicated security going on, and a bit of ugly technology is even exposed when it displays a QR code for the cashier to scan. But nowhere ever did it ask me any questions that required me to learn any weird technical concepts. I never had to choose a number of bits or know anything about what a "key" is. As soon as a user sees these things, they have to either make a random guess or go off for hours teaching themselves what it means. It's a complete fail.

The words "PGP/MIME" should never appear on a PGP email plugin! When you send a regular file attachment, you don't have to specify the MIME type. It's all silent and hidden.



