The PGP Problem (latacora.micro.blog)
483 points by bellinom on July 17, 2019 | 368 comments

I was an engineer at PGP from 2004-2011 and ended up running the server team as lead engineer. I wouldn't disagree with most of the points brought up by the author; both the code base and the standard have accreted over time, and it's incredibly complex. There were only a couple of people on the team who really understood all the facets of either the OpenPGP or S/MIME/X.509 standards. It's made worse in that it was a hack on top of another system that had accreted over time (email), and that system was never intended to be secure. We had a massive database of malformed or non-compliant emails from every email client since the beginning of time. The sub-packets that the author mentions were primarily used for dealing with forwarded chains of encrypted email messages, where each previous message had its own embedded MIME-encoded encrypted/signed messages and attachments.

The problem is that no one else has gone through the process of establishing a community/standard capable of replacing it. Each system has built its own non-interoperable walled garden that doesn't work with anyone else's, and none of them have a user base large enough to make encryption easy and pervasive.

My own secret hope is that Apple is forced to open iMessage as a part of an anti-trust action and that acts as a catalyst for interoperability.

Forcing iMessage to open will immediately result in MITM iMessage proxies that users can use to store iMessages that are meant to auto-delete, so that they can violate the wishes of the other party. These do not exist today because Apple binds iMessage to your hardware and bans your entire device when anyone is found to be operating such a service, either for themselves or others.

Do you want open source clients that can be altered to ignore all privacy criteria — or do you want closed-source clients that make a good faith effort to adhere to auto-deletion protocols?

Pick one. There is no middle ground.

You can violate the wishes of the other party by taking a screenshot or, in the extreme, a photo of the screen. You're only preventing the very lazy/unmotivated from retaining messages.

Correct, screenshots are a viable attack against both closed- and open-source platforms. Preventing casual retention is the best you can hope for, and it's a worthy goal even though it doesn't deliver the perfection of a Faraday-caged clean room.

So, your threat model includes MITM servers, but not cameras? It seems a little silly to worry about the MITM problem when you can simply snap a photo already.

They are both valid threat models, but ones which for me have different meanings.

Screenshotting or photographing the screen of a device owned by my intended message recipient is a reasonably small problem to me. If my recipient wants to expose a message I've sent them, they're going to be able to do that. I never expected any more privacy for that message than I'd have accepted based on my trust in that person.

MITM servers are a whole other thing. Large scale surveillance of all users of a specific server, "full take" collection and searchable databases of messages available effectively forever to unknown current and future opponents?

Different threats. Yeah, I'm happy enough to accept the risk of cameras in the hands of my correspondents. Way happier than I'd be with MITMable servers (or services that can add "ghost users" as the UK seems to be proposing).

I might be missing something, but how would large scale surveillance with searchable databases be possible with e2e encryption? They could save the messages, but they would still be encrypted.

If you have to get into a legal battle with someone about misuse of information, it's much better that you are able to focus on the sender and the recipient of the information as potential sources for that information instead of also having to go after every potential network hop as well.

Actually, Apple's approach results in a MUCH lower level of retention than other providers', even if someone can screenshot all conversations.

Just like with Snapchat, auto-delete is a fantasy and is not worth sacrificing security or privacy for.

iMessage doesn't have any kind of auto-deleted messages; it's a feature that messages are persistent across all your devices.

Incorrect. Audio messages are deleted two minutes after playback by default.

Which is a receiver-side setting and can be set to one year. Your point is moot.

For me, the two choices in settings are "after two minutes" and "never", nothing in between. As you said, this is not a security setting, it's a storage space-saving setting.

This is unrelated to the current discussion and not meant to contradict your "Your point is moot", but instead just as a hopefully useful anecdote: In my experience, this requires the sender to choose 'Keep' too. I have been bitten several times by me sending an audio message to my wife because I was in a situation in which typing was complicated for me (outdoors, plenty of sunshine, I don't have the best eyesight), only to find that she never even got to see it because it got self-deleted after a few minutes.

My conjecture from looking at how this has worked for me is that the sender must choose 'Keep' so that the audio message stays on the receiver's phone until it's listened to, and the recipient must choose 'Keep' so that the audio message stays on their phone after listening.

I, of course, have no proof of this other than my own experience on devices a few years old (iPhones 5 and 6).

I also had some odd occurrences like this, and I simply stopped using voice messages over iMessage. It hasn't really penetrated the local phone culture so to speak, so it isn't a problem. Those that I do use it with happen to be on WhatsApp, which retains messages forever or something.

That's a client side feature meant to save disk space.

> The sub-packets that the author mentions were primarily used for dealing with forwarded chains of encrypted email messages where each previous message had its own embedded MIME-encoded encrypted/signed messages and attachments.

If you ("you" here being the PGP team) knew going into the design that the use-case of ASCII-armored-binary (.asc) documents is specifically transmitting them in a MIME envelope... then, instead of making .asc into its own hierarchical container format, why didn't you just use MIME, which is already a hierarchical document container format?

I.e., if you're holding some plaintext and some ASCII-armored-binary ciphertext, why not just make those into the "parts" of a mime/multipart container, and send that as the email?

Then all the work of decoding—or encoding—this document hierarchy would be the job of the email client. The SMIME plugin would only have to know how to parse or generate the leaf-node documents that go into the MIME envelope (and require of the email client an API for retrieving MIME parts that the SMIME parts make reference to.)

And you'd also get the advantage of email clients showing "useful" default representations for PGPed messages, when the SMIME extension isn't installed.

• Message signature parts would just be dropped by clients that don't recognize them. (Which is fine; by not having SMIME installed, you're opting out of validating the message, so you don't need to know that it was signed.)

• Encrypted parts would also be dropped, enabling you to send an "explanation" part as plaintext with the same MIME type as the "inner" type of the encrypted part, explaining that the content of the message is encrypted.

I guess this wouldn't have worked with mailing lists, and other things completely ignorant of MIME itself? But it would have been fine for pretty much all regular use of SMIME.
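The layering idea above can be sketched with Python's stdlib email package. This is illustrative only: the part types and names here are mine, not the real standardized profile (PGP/MIME, RFC 3156, uses dedicated multipart/encrypted and multipart/signed containers), but it shows how a plaintext "explanation" part and a ciphertext part can live side by side in one MIME envelope.

```python
from email.message import EmailMessage

def build_layered_message(sender, recipient, armored_ciphertext):
    # Let MIME carry the hierarchy instead of nesting everything inside
    # ASCII armor.  Part names/types are illustrative, not RFC 3156.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Encrypted message"
    # The plaintext "explanation" part: what clients without crypto
    # support will show by default.
    msg.set_content(
        "This message is encrypted. Install an OpenPGP-capable client to read it."
    )
    # The ciphertext as its own part; clients that don't recognize the
    # type can simply drop or ignore it.
    msg.add_attachment(
        armored_ciphertext.encode(),
        maintype="application",
        subtype="pgp-encrypted",
        filename="message.asc",
    )
    return msg
```

A client without the plugin renders the text part; a client with it looks for the application part and hands it to the crypto layer.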

Have you looked at DIME?


I can't see that happening: there are multiple other messaging apps that are clearly successful and popular on iOS and macOS, and neither iOS nor macOS is the majority OS.

Until a better solution is brought forward, my team implemented a PGP packet library to make everyone's lives a little bit easier: https://github.com/summitto/pgp-packet-library

Using our library you can generate PGP keys using any key derivation mechanism for a large variety of key types! When used correctly, this will greatly improve how you can generate and back up your keys!

I did some "OpenPGP Best Practices" work for a client recently. They don't have a choice, because a third party requires it. The goal was to make sure it was as safe as possible. One thing that struck me is that I have a simplified mental model for the PGP crypto, and reality is way weirder than that. The blog post says it's CFB, and in a sense that's right, but it's the weirdest bizarro variant of CFB you've ever seen.

In CFB mode, for the first block, you take an IV, encrypt it, and XOR it with the first plaintext block. Second block: you take the first ciphertext block, encrypt that, and XOR with the second plaintext block, and so on. It feels sorta halfway between CBC and CTR.
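Textbook CFB, as just described, fits in a few lines. A sketch, using a keyed SHA-256 truncation as a stand-in for the block cipher's forward direction (which is the only direction CFB ever uses; a real implementation would use AES here):

```python
import hashlib

BS = 16  # block size in octets

def E(key: bytes, block: bytes) -> bytes:
    # Toy keyed function standing in for the block cipher's forward
    # direction; NOT a real cipher, just keeps the example dependency-free.
    return hashlib.sha256(key + block).digest()[:BS]

def cfb_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    fr, out = iv, bytearray()
    for i in range(0, len(plaintext), BS):
        cblock = bytes(k ^ p for k, p in zip(E(key, fr), plaintext[i:i + BS]))
        out += cblock
        if len(cblock) == BS:
            fr = cblock  # next keystream block is derived from this ciphertext
    return bytes(out)

def cfb_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    fr, out = iv, bytearray()
    for i in range(0, len(ciphertext), BS):
        cblock = ciphertext[i:i + BS]
        out += bytes(k ^ c for k, c in zip(E(key, fr), cblock))
        if len(cblock) == BS:
            fr = cblock
    return bytes(out)
```

Note decryption also only uses E in the forward direction, which is why CFB (like CTR) never needs the cipher's inverse.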

Here's the process in OpenPGP, straight from the spec because I can't repeat this without being convinced I'm having a stroke:

   1.   The feedback register (FR) is set to the IV, which is all zeros.

   2.   FR is encrypted to produce FRE (FR Encrypted).  This is the
        encryption of an all-zero value.

   3.   FRE is xored with the first BS octets of random data prefixed to
        the plaintext to produce C[1] through C[BS], the first BS octets
        of ciphertext.

   4.   FR is loaded with C[1] through C[BS].

   5.   FR is encrypted to produce FRE, the encryption of the first BS
        octets of ciphertext.

   6.   The left two octets of FRE get xored with the next two octets of
        data that were prefixed to the plaintext.  This produces C[BS+1]
        and C[BS+2], the next two octets of ciphertext.

   7.   (The resynchronization step) FR is loaded with C[3] through
        C[BS+2].

   8.   FR is encrypted to produce FRE.

   9.   FRE is xored with the first BS octets of the given plaintext,
        now that we have finished encrypting the BS+2 octets of prefixed
        data.  This produces C[BS+3] through C[BS+(BS+2)], the next BS
        octets of ciphertext.

   10.  FR is loaded with C[BS+3] to C[BS + (BS+2)] (which is C11-C18
        for an 8-octet block).

   11.  FR is encrypted to produce FRE.

   12.  FRE is xored with the next BS octets of plaintext, to produce
        the next BS octets of ciphertext.  These are loaded into FR, and
        the process is repeated until the plaintext is used up.

Yeah, so CFB, except your IV isn't your IV, and you randomly do something with two bytes as... an... authenticator? And then everything after that is off by two. This isn't the only case where OpenPGP isn't just old; it's old and bizarre. I don't have a high opinion of PGP to begin with, but even my mental model is too charitable.
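The quoted steps can be transcribed almost literally. A sketch (again with a keyed SHA-256 truncation standing in for the block cipher's forward direction, since CFB never needs the inverse; a real implementation would use AES):

```python
import hashlib
import os

BS = 16  # cipher block size in octets

def E(key: bytes, block: bytes) -> bytes:
    # Toy stand-in for the block cipher's forward direction.
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def openpgp_cfb_encrypt(key, plaintext, prefix=None):
    # Step numbers follow RFC 4880, section 13.9.
    if prefix is None:
        prefix = os.urandom(BS)
    data = prefix + prefix[-2:]            # BS random octets + 2 repeated "quick check" octets
    out = bytearray()
    fr = bytes(BS)                         # 1. FR := IV = all zeros
    out += xor(E(key, fr), data[:BS])      # 2-3. C[1..BS]
    fr = bytes(out[:BS])                   # 4.
    out += xor(E(key, fr)[:2], data[BS:])  # 5-6. C[BS+1], C[BS+2]
    fr = bytes(out[2:BS + 2])              # 7. the resynchronization step
    for i in range(0, len(plaintext), BS): # 8-12. ordinary CFB, shifted by two
        cblock = xor(E(key, fr), plaintext[i:i + BS])
        out += cblock
        if len(cblock) == BS:
            fr = cblock
    return bytes(out)

def openpgp_cfb_decrypt(key, ciphertext):
    fr = bytes(BS)
    prefix = xor(E(key, fr), ciphertext[:BS])
    fr = ciphertext[:BS]
    quick = xor(E(key, fr)[:2], ciphertext[BS:BS + 2])
    if quick != prefix[-2:]:               # the two "authenticator-ish" octets
        raise ValueError("quick-check octets do not match")
    fr = ciphertext[2:BS + 2]              # resync: everything after is off by two
    out = bytearray()
    for i in range(BS + 2, len(ciphertext), BS):
        cblock = ciphertext[i:i + BS]
        out += xor(E(key, fr), cblock)
        if len(cblock) == BS:
            fr = cblock
    return bytes(out)
```

The "off by two" shows up directly in the code: after the quick-check octets, the feedback register is reloaded from C[3..BS+2] rather than a block boundary. (And note the quick check is no authenticator: it's two unauthenticated octets, famously usable as a decryption oracle.)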

(Disclaimer: I'm a Latacora partner, didn't write this blog post but did contribute indirectly to it.)

I think this is called Plumb CFB. It was invented by Colin Plumb back in the day. I first saw it in their FDE products which didn’t have a good IV generation process (kind of acting like a replacement for XTS or a wide block cipher) and no, I don’t know what it’s for.

So what do I use for encrypted messaging that can, like, replace email, then? Nobody seems to have provided any sort of satisfactory answer to this question. To be clear, an answer to this has to not just be a secure way of sending messages; it also has to replicate the social affordances of email.

E.g., things distinguishing how email is used from how text-messaging is used:

1. Email is potentially long-form. I sit down and type it from my computer. Text-messaging is always short, although possibly it's a series of short messages. A series of short emails, by contrast, is an annoyance; it's something you try to avoid sending (even though you inevitably do when it turns out you got something wrong). Similarly, you don't typically hold rapid-fire conversations over email.

2. On that point, email says: you don't need to read this immediately. I expect a text message will probably be read in a few minutes, and probably replied to later that day (if there's no particular urgency). I expect an email will probably be read in a few hours, and probably replied to in a few days (if there's no particular urgency).

3. It's OK to cold-email people. To text someone you need their phone number; it's for people you know. By contrast, email addresses are things that people frequently make public specifically so that strangers can contact them.

So what am I supposed to do for secure messaging that replicates that? The best answer I've gotten for this so far -- other than PGP, which is apparently bad -- is "install Signal on your computer in addition to your phone and just use it as if it's email". That's... not really a satisfactory answer. Like, I expect a multi-page Signal message talking about everything I've been up to for the past month to annoy its recipient, who is likely reading it on their phone, not their computer. And I can't send someone a Signal message about a paper they wrote that I have some comments on, not unless they're going to put their fricking phone number on their website.

So what do I do here? The secure email replacement just doesn't seem to be here yet.

Your point 1 especially speaks to me. Phone-based messaging in general isn't appropriate for the things I would most like to be kept private between me and a recipient, because those sorts of things can't be created on a phone. I've found PGP pretty good for making that happen, when I'm working with someone who a) uses PGP also and b) exercises some caution when using it. I haven't found an option that I can trust that will work for me.

I strongly agree with this; email is not instant messaging, and there is not yet any secure replacement for email.

We need a modern design for a successor protocol to email, and no one is working on it because they prefer instant messaging (or think other people do).

Google tried with Wave but failed. I don't think it can be done.

Everything would have to support SMTP as a fallback for the many people who just don't care, and thus it couldn't actually improve anything.

I think we're more likely to "accidentally" end up there via E2EE document collaboration tools that are in development now.

One day, a while after they become usable and common, people will just realize they've been sharing documents E2EE in place of sending email, and they'll be using it for basically everything that matters.

It would be a proper restart and allow for significant improvements in usability and security and everything else.

Can you point me to examples of such tools?

Still under active development; nothing is really ready to use yet.

Do you feel like PGP is a good way to cold email people in practice? (I'm not trying to put words in your mouth, but that sounds like what you're saying.)

> Do you feel like PGP is a good way to cold email people in practice?

Not OP but I can definitely say that's a yes from me after doing this repeatedly. I cold email people once a month or so, and if it is to do with anything sensitive I'll check to see if a public key is available for them (on their website is best, else I check a public key server and use that key as long as there is only one listed).

I get a better response rate from PGP/GPG users too; I can only recall one not responding to an encrypted message, and I sent a follow-up message unencrypted, which they responded to.

I think it's important to send PGP messages for ordinary communications whenever possible, because this normalizes it and may increase the workload for those trying to defeat it.

Good question. Not sure. Although I don't see why I wouldn't, if they have a PGP key listed? (I guess there is some question over whether the listed key is actually them?) But my point is that, well, email is a good way to do that, and Signal isn't, so I'm going to use email rather than Signal.

Honestly, I wouldn't focus on (3), because as I see it, if you can replicate the feel of email, things like (1)-(2), so that it can replace email in contexts without (3), then (3) will just come naturally as it slowly replaces email.

Edit: All this is assuming it isn't tied to a phone number or something similar, of course!

How does one list a public PGP key? Is there a verified central listing service?

One of the major features of PGP is that you don't have to rely on -- trust -- a "verified central listing service".

The "Web of Trust" [0] fills that role:

> As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.

[0]: https://en.wikipedia.org/wiki/Web_of_trust

In practice a web of trust is only trustworthy 1 degree out from you. Just because you trust someone doesn't mean you should trust the people they trust. The web of trust is a difficult to use misfeature. In theory it's great. In practice it's unusable.

The problem is nobody uses it correctly.

If you control your own domain, Web Key Directory [0] is a good option too.

[0]: https://wiki.gnupg.org/WKD
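For the curious, a WKD lookup URL can be derived in a few lines. This is a sketch of the "direct method" as described in the GnuPG wiki and the WKD draft: SHA-1 of the lowercased local part, z-base-32 encoded, under a fixed well-known path. (The translation trick works because z-base-32 differs from RFC 4648 base32 only in its alphabet for whole-group inputs, and a 160-bit SHA-1 digest encodes to exactly 32 characters with no padding.)

```python
import base64
import hashlib
from urllib.parse import quote

# RFC 4648 base32 alphabet -> z-base-32 alphabet used by WKD
_B32 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
_ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"
_TRANS = str.maketrans(_B32, _ZB32)

def wkd_direct_url(address: str) -> str:
    # "Direct method": the key is served from the mail domain itself.
    local, domain = address.split("@")
    digest = hashlib.sha1(local.lower().encode()).digest()  # 20 bytes
    hu = base64.b32encode(digest).decode().translate(_TRANS)  # 32 z-base-32 chars
    return f"https://{domain}/.well-known/openpgpkey/hu/{hu}?l={quote(local)}"
```

A client (GnuPG does this with `--locate-keys`) fetches that URL over HTTPS and gets the binary key back, with the domain owner implicitly vouching for the binding.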

You can put it on your website or anywhere really. Some people use keybase.io for this.

You can't put it anywhere really, otherwise anyone could tie their key to your identity. Keybase.io is a good solution.

You're technically correct of course.

I guess they mean putting the key on a webpage or using Web Key Directory or a centralized service such as https://keys.openpgp.org

There used to be a bunch of those in the 90s and it was a mess.

A "mostly cold" email to a security email address listed on a website is probably the only use I've ever had for PGP-encrypting email to someone I hadn't already been communicating with... (But I can imagine other scenarios. I bet Snowden's cold email to Greenwald was encrypted...)

Trust on first use is not an uncommon security practice. It's imperfect, but often the best alternative, and a good solution while we wait for a replacement to gain traction.

Why wouldn't it be?

3 is incorrect for encrypted email: you can't email an encrypted message to someone who isn't expecting it and count on them being willing and able to decrypt it.

Why not? I mean, that's what publicly listing your public key is for, right?

Nope. I have at least three public never-expiring keys that I am unable to revoke and that remain listed as valid, because the keyservers don't occasionally revalidate proof of ability to decrypt.

Well, the keyservers also don't validate if it's your key instead of a key submitted by me with your email address on it, so for any secure messaging you need some other, authenticated channel for the potential recipient to assert which is their key.

Oh, that's a good point. Heh, I have one of those, too, which even caused a problem once[0]. I wouldn't expect people to find it first, though, because I wouldn't expect people to go to a keyserver first; I'd expect them to find my key on one of the places I have it listed on the web. I've never tried blindly entering someone's email address into a keyserver and just hoping they have a key; I've only sent PGP-encrypted email to people who list their keys on the web.

[0]How it caused a problem: I added an email address to my public key (or maybe it expired or something, I forget), and asked people to refresh their copy of my key. One person instead downloaded it entirely anew from a keyserver and got the old one. Oops. (Admittedly I didn't explicitly use the word "refresh".) Anyway yeah -- though this problem had happened to me, it hadn't occurred to me that it might be common; maybe this is more of a problem than I thought...

GPG chooses the key to use based on alphanumeric ordering of the short key ID, last time I experimented anyways. Best of luck overcoming that!

> never-expiring keys

While this is bad, the keyserver issue is still valid.

I don't get what's insecure about normal unencrypted email. It's sent over https, isn't it? It's not like I can read your emails unless I break into Google's servers, no? And even if I do, they probably aren't even stored in plaintext.

I just don't get the encrypted email obsession. It's impossible for an individual to withstand a targeted cyber attack, so it seems pointless to go above and beyond to ultra-encrypt every little thing.

> It's not like I can read your emails unless I break into Google's servers, no?

Well, first of all, "breaking in" isn't the only way someone might get access to data on Google's servers. There are such things as subpoenas, not to mention that a Google employee might abuse access to the servers. And then I would be _very_ surprised if Google doesn't use the content of your emails for advertisement and tracking purposes.

Furthermore, unless both parties are using gmail, the email will be stored at least temporarily on other mail servers, which may be less secure (and you might not even know who controls them).

> And then I would be _very_ surprised if google doesn't use the content of your emails for advertisement and tracking purposes.

That would go against their own privacy policy. But they are one change away from doing it.

Really? When gmail came out they were explicitly up front about using the content of the email to deliver targeted ads. Has that changed?

There is a lot of misinformation around, and the Google-haters crowd has plenty of pitchforks.


> Google does not use keywords or messages in your inbox to show you ads. Nobody reads your email in order to show you ads.

It's not misinformation. Gmail only stopped scanning messages for ad targeting in 2017:


It's just the passage of time that has rendered it misinformation. Until not so long ago, Gmail messages were actually scanned for ads -- IIRC, Google was actually pretty upfront about it when they first launched Gmail, and explained that it's how they could afford to give users 1G of inbox space in a day and age where 25 MB was pretty good and 100 MB was pretty hard to get for free.

They eventually stopped, although the phrasing of the privacy policy is vague enough that, as wodenokoto mentions above, I wouldn't be surprised if email messages were still scanned for some advertising purposes. The fragment on the page you link to is only about ads shown in Gmail, doesn't exclude using keywords and messages for tracking, classifying etc. (it just doesn't use them "to show you ads") and doesn't actually exclude using programs to process messages (i.e. you can still reasonably say that "nobody reads" messages if you just feed them into a program). It's also not very obvious if "messages in your inbox" also includes messages you send.

FWIW, I think the policy is deliberately open-ended so as to be future-proof, but I doubt emails are an important source of advertising data today, so I think it's likely that Google doesn't rely on them that much anymore. Most sources of legitimate (i.e. non-spam) email that are useful for advertising -- e.g. online shops and the like -- already track you and show you ads, and Google is already deeply embedded there. Millions and millions of personal accounts are a useful strategic asset to have, but I think there are better sources of data.

> I doubt emails are an important source of advertising data today, so I think it's likely that Google doesn't rely on it that much anymore.

I completely disagree with the first part of your assertion. Email is still the main medium for all organizations, especially private companies taking your money for something, to communicate with you with plenty of details. Be it ordering some product online or booking a flight or other travel ticket or ordering a service or anything else.

The richness and amount of information conveyed in SMS notifications pales in comparison to email. So email is still a treasure trove of what people are doing and have done.

> Be it ordering some product online or booking a flight or other travel ticket or ordering a service

That's true, but all these places already track the living hell out of you. Even the newsletters they send over email have tracking information. By the time they've sent you an email after your first purchase, they know everything they need to show you relevant ads (in fact, that's probably why you made the first purchase...). I doubt bulk analysis of emails can show anything that is not already known way before the emails got sent.

Depends on what they mean by messages? The whole raw message, incl. all headers? The text user sees?

Google can probably serve nice ads just based on metadata it gathers at the SMTP level, without even using the raw message. Someone mails his bank: maybe show some banking-related ads, etc.

And it would still stay true to the proclamation on that page.

Are we sure about this? When I downloaded my Google data, I fished around and found information related to my Amazon purchase history and etcetera. The only possible way that I can think of that Google is able to get my purchase data is from my email.

That's not misinformation - it's just mildly dated information. Google stopped scanning emails for ads very recently.

It was never a conspiracy theory. Google used to be very open about the fact that they were scanning emails.

Yes, it has.

What changed was G Suite. They want to move away from the “we are harvesting your emails” image, in order to better sell paid accounts.

“Consumer Gmail content will not be used or scanned for any ads personalization after this change.”


If you pay Google for email, they will not use the content for ads. For free accounts, I think they still do.

In modern practice, email is sent over TLS sockets already. Any good email client should prohibit you from using SMTP, POP, or IMAP without TLS, and for the past few years, even the MX-to-MX transfers in the backend have started to become protected (albeit mostly opportunistically at this point, I believe) with TLS.

So the only people who can read email are you, your counterparty, your ESP, and your counterparty's ESP, assuming the email providers are following good practice.

This is an excellent explanation overall. I do however think that it's important to note that opportunistic STARTTLS is vulnerable to downgrade attacks by a MITM. Since this would have to be a MITM of e.g. Gmail, it's not trivial by any means, but neither is it completely out of reach (see, for example, the periodic rerouting of the internet caused by odd BGP advertisements).

One further note is that you can know post-hoc if an email was delivered to Gmail via TLS by the presence or absence of a red lock in the Gmail app or web UI.

> STARTTLS is vulnerable to downgrade attacks by mitm.

Not only that, but intentionally downgrading STARTTLS commands has at times been the default configuration for some Cisco routing gear.

(Buy me a beer one day and I might tell you about the time I charged one of Australia's big four banks about $70k to debug that in a router on their internal network that nobody there knew existed...)

Which routers did that? I'm well aware that the ASA firewalls did it ("ESMTP inspection") -- I've disabled that dozens of times -- but I've never heard of a router that did it (by default).

This is often explained in a needlessly confusing way.

STARTTLS isn't a problem; the problem is that if a user checks a box (or the moral equivalent) saying it's optional, obviously bad guys will opt for "No". If your client is set to _require_ STARTTLS, then an adversary blocking STARTTLS has the same effect as if they just blocked the IP address of the mail server: you get an error and it doesn't work.

There's no reason to use "opportunistic" STARTTLS for your own mail servers (i.e. IMAP, or SMTP to an outbound relay). Nobody should be configuring their own gear, or corporate gear, to just let somebody else decide whether it uses encryption.

Opportunistic encryption still has a place in server-to-server SMTP relaying, but if you're choosing options like "if available" in a desktop/phone mail client that's wrong.
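The "require" vs. "opportunistic" distinction boils down to a one-line policy decision at connect time. A sketch with Python's smtplib (helper names are mine):

```python
import smtplib
import ssl

def starttls_decision(extensions, policy):
    # policy "require": a missing STARTTLS becomes a hard failure, so a
    #   capability-stripping MITM can only cause an error, not cleartext.
    # policy "opportunistic": use TLS only if offered -- silently downgradeable.
    offered = "starttls" in {e.lower() for e in extensions}
    if policy == "require" and not offered:
        raise ConnectionError("STARTTLS not offered; refusing to continue in cleartext")
    return offered

def connect(host, port=587):
    conn = smtplib.SMTP(host, port, timeout=10)
    try:
        conn.ehlo()
        if starttls_decision(set(conn.esmtp_features), "require"):
            conn.starttls(context=ssl.create_default_context())
            conn.ehlo()  # re-EHLO: capabilities advertised before TLS are untrusted
        return conn
    except Exception:
        conn.close()
        raise
```

With "require", a downgrade attempt surfaces as an error instead of a quiet cleartext session, which is exactly the point made above.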

It may be better than nothing, but it's far from a sure thing: if you can BGP-announce an IP, you can get a certificate from Let's Encrypt.

This is a trivial attack vector not just for state actors, but also stupid kids: in the early 2000s, I announced Microsoft's AS from my own network (AS21863) to see what would happen and got a significant amount of microsoft.com's traffic. There was no security, and there still isn't: most multihomed sites that change links frequently inevitably find themselves unfiltered, either through accident or misplaced trust.

For this reason, TLS without key-pinning (even with IP filtering, as is popular with a lot of banks/enterprise) is far less secure than people realise, and on unattended links (server-to-server relaying) it offers only some casual confidentiality (since detection is unlikely) at best.

If you use MTA-STS, you have a good chance of detecting this kind of attack though. I've not seen anyone use a long policy on a distant but popular network to require someone BGP hijack two big networks to beat it, but I suspect such a disruption would be felt across the Internet.

Let's Encrypt supposedly has deployed a system that makes connections from different locations around the world to make this attack more difficult, and also you can't get a Let's Encrypt certificate for gmail.com or microsoft.com (or gmail.* or microsoft.*, for that matter); there's a block list for high-value targets.

I would hope letsencrypt has a number of heuristic safeguards, but I can guarantee they do not make connections from multiple routing paths: My ad server registers a certificate during the SNI hello (but before the certificate is presented), and I get a certificate after a single ping.

> I do however think that it's important to note that opportunistic STARTTLS is vulnerable to downgrade attacks by mitm

See SMTP MTA Strict Transport Security (MTA-STS):

* https://tools.ietf.org/html/rfc8461

And STARTTLS Everywhere:

* https://starttls-everywhere.org/

MTA-STS prevents downgrade attacks.
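An MTA-STS policy is just a small key/value text file served from a fixed HTTPS URL, which is what lets a sender detect stripping: the policy arrives over an authenticated channel the MITM can't forge. A sketch of the receiving-side logic (parser only; fetching is a plain HTTPS GET of the URL below):

```python
def mta_sts_policy_url(domain: str) -> str:
    # RFC 8461: the policy lives at a fixed, HTTPS-only well-known URL
    # on a dedicated "mta-sts" subdomain.
    return f"https://mta-sts.{domain}/.well-known/mta-sts.txt"

def parse_mta_sts(text: str) -> dict:
    # RFC 8461 policies are "key: value" lines; the "mx" key may repeat.
    policy = {"mx": []}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy
```

A sending MTA with a cached "mode: enforce" policy refuses to deliver unless the connection uses TLS with a certificate matching one of the listed MX patterns, so STARTTLS stripping turns into a delivery failure rather than a silent downgrade.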

> Any good email client should prohibit you from

Most e-mail servers require a login (regardless of fetching or sending), and it would take a really incompetent sysadmin to allow that to happen in the clear.

> It's sent over https, isn't it?

It's actually more complicated than that.

If you're using a web mail, your connection to the mail provider most likely uses HTTPS. That is, HTTP over TLS. When the mail is sent, it depends whether the recipient uses the same provider or not. If it's the same provider, well, protocols are irrelevant. If not, it will usually be SMTP over TLS (minus any potential problems with STARTTLS).

The main problem with that is that the mail is not encrypted on the various servers it goes through. Only the server-to-server connections are encrypted. So your provider can access your email, and so can the recipient's. When that provider's business model is reading your emails so it can send you targeted ads, this is less than great. (Yes, Google reads your emails. They try to reassure you by telling you their employees don't read them, but the fact that the process is automated actually makes it worse.)

Also, it might surprise some people just how many servers an email travels through to get to its destination. I just grabbed a random mail from a mailing list I'm on (generally a worst-case scenario) and it had 7 Received headers. Every mail server is supposed to add a Received header when the mail passes through, but there's no way to enforce that, so all I can really say is that the mail probably passed through at least 7 servers on its way to my inbox.

Each one of those hops may or may not have talked TLS to the next hop. Each one probably wrote the mail out to a disk based mail queue in plaintext. There is nothing preventing any of those 7 servers from keeping around that mail even though they forwarded it on. There is nothing preventing them from indexing the mail for spam or marketing purposes.
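That Received-header bookkeeping is easy to inspect yourself with Python's stdlib email module. A small sketch using a fabricated three-hop message (all hostnames invented):

```python
import email

# Fabricated message: each relay prepends its own Received header,
# so they read newest-first.
raw = (
    "Received: from mx1.example.org by mx2.example.net; Wed, 17 Jul 2019 10:00:02 +0000\r\n"
    "Received: from lists.example.com by mx1.example.org; Wed, 17 Jul 2019 10:00:01 +0000\r\n"
    "Received: from sender.example.io by lists.example.com; Wed, 17 Jul 2019 10:00:00 +0000\r\n"
    "From: alice@example.io\r\n"
    "To: bob@example.net\r\n"
    "Subject: hop-count demo\r\n"
    "\r\n"
    "body\r\n"
)
msg = email.message_from_string(raw)
hops = msg.get_all("Received", [])
print(f"at least {len(hops)} hops")  # only a lower bound: relays can omit the header
```

Running the same two lines against a real message from your inbox shows the actual path, modulo any relay that chose not to stamp it.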

Any sysadmin can read your email, in general. There's no holistic "this email can't be read by anyone other than the recipient" solution, which is what a lot of us are aiming for. Things like Protonmail and Tutanota get really close, but they're proprietary solutions and don't work for "the many" (such as yourself) who use a hosted solution such as Gmail, which seems to have no interest in providing an open solution.

I don't want my emails to be readable by Google, yet they will when people I communicate with are using Gmail.

Mail doesn't use HTTPS. And even where TLS is supported, you don't know that it was actually used at every hop.

The old ‘security is hard so let’s not do it’ argument. Emails are not properly encrypted in transit and are available for access at the provider if a court decides to grant a warrant. That might not be enough protection for everyone.

It is possible for a determined individual to withstand targeted attacks if he’s careful and willing to make the sacrifices that come with the territory.

Are you assuming everyone uses Gmail or do you not know how SMTP works?

You might be surprised just how many mail providers support STARTTLS for email, at least opportunistically.


And opportunistic STARTTLS is vulnerable to downgrade attacks by MITM.

The problem with email is that, no matter how sure you are that the connection between you and your mail server (and your local and server-side storage) is secure, the parties you interact with may not be. And then, as is talked about in the article, your recipient forwards the mail as plaintext...

And downgrade attacks are mitigated by MTA-STS: https://www.hardenize.com/blog/mta-sts

Not supported by everyone just yet since this is a new standard, but Gmail at least supports it.

There are a few places where this engages in goalpost shifting that seems less than helpful, even though I end up agreeing with the general thrust. Let's focus on one:

> Put a Signal number on your security page to receive bug bounty reports, not a PGP key.

We can reasonably assume in 2019 that this "security page" is from an HTTPS web site, so it's reasonably safe against tampering, but a "Signal number" is just a phone number, something bad guys can definitely intercept if it's worth money to them, whereas a PGP key is just a public key and so you can't "intercept" it at all.

Now, Signal doesn't pretend this can't happen. It isn't a vulnerability in Signal, it's just a mistaken use case, this is not what Signal is for, go ask Moxie, "Hey Moxie, should I be giving out Signal numbers to secure tip-offs from random people so that nobody can intercept them?".

[ Somebody might think "Aha, they meant a _Safety number_ not a Signal number, that fixes everything right?". Bzzt. Signal's Safety Numbers are per-conversation, you can upload one to a web page if you want, and I can even think of really marginal scenarios where that's useful, but it doesn't provide a way to replace PGP's public keys ]

Somebody _could_ build a tool like Signal that had a persistent global public identity you can publish like a PGP key, but that is not what Signal is today.

The safety number is only partly per-conversation. If you compare safety numbers of different conversations, you'll discover that one half of them is always the same (which half that is changes depending on the conversation). This part is the fingerprint of your personal key.

The Signal blog states that "we designed the safety number format to be a sorted concatenation of two 30-digit individual numeric fingerprints." [1]

The way I understand it, you could simply share your part of the number on your website, but Moxie recommends against it, since this fingerprint changes between reinstalls.

[1] https://signal.org/blog/safety-number-updates/
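To make the "sorted concatenation of two 30-digit fingerprints" format concrete, here's a toy model. The real derivation iterates SHA-512 thousands of times over a versioned encoding of the identity key; a single hash round stands in here, so treat it as illustrative only:

```python
import hashlib

def numeric_fingerprint(identity_key: bytes, digits: int = 30) -> str:
    # Illustrative stand-in: Signal actually iterates SHA-512 over a
    # versioned encoding of the identity key; one round is used here.
    h = hashlib.sha512(identity_key).digest()
    return str(int.from_bytes(h, "big"))[-digits:]

def safety_number(key_a: bytes, key_b: bytes) -> str:
    # Sorted concatenation: both parties compute the same 60-digit number
    # regardless of which side does the comparing.
    return "".join(sorted([numeric_fingerprint(key_a), numeric_fingerprint(key_b)]))

alice, bob = b"alice-identity-key", b"bob-identity-key"
assert safety_number(alice, bob) == safety_number(bob, alice)
assert len(safety_number(alice, bob)) == 60
# One half is Alice's own fingerprint, the same across all her conversations:
assert numeric_fingerprint(alice) in safety_number(alice, bob)
```

The last assertion is the property the parent comment relies on: your half is stable (until you reinstall), so in principle it's publishable on its own.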

Ah! Yes, I see. You'd need to figure out which is "your" half, which the application as it exists today doesn't help you to do since that's not what they're going for. The person initiating would need to send something to establish a conversation, like "Er, hi?" and then once that conversation exists they can verify the number shown on your web page matches half of the displayed safety number as expected and actually proceed.

It's clunky, but less so than I feared. I can actually imagine a person doing this. I mean, they won't, but like PGP this is something a person _could_ do if they were motivated and competent.

> persistent global public identity

Certificate Transparency could be reused/abused to host it. If, for example, you issued a cert for the name <key>.contact.example.com and the tooling checked CT logs, this could be a very powerful directory of contacts. Using CT monitors you could see if/when someone tampers with your domain-name contact keys.

Mozilla is planning something similar for signing software: https://wiki.mozilla.org/Security/Binary_Transparency
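A sketch of what such a CT-backed contact directory could look like. Everything here is hypothetical - the <fingerprint>.contact.example.com naming scheme and both helper functions are invented for illustration:

```python
import hashlib

# Hypothetical scheme from the comment above: publish a cert for
# <key-fingerprint>.contact.example.com, then watch CT logs for that label.
def contact_label(pubkey: bytes, domain: str = "contact.example.com") -> str:
    fp = hashlib.sha256(pubkey).hexdigest()[:32]
    return f"{fp}.{domain}"

def tampered(ct_log_names, expected_names):
    # The core check a CT monitor would run: flag any contact-subdomain
    # issuance the domain owner didn't authorize.
    return sorted(set(ct_log_names) - set(expected_names))

mine = contact_label(b"my-long-term-key")
assert tampered([mine], [mine]) == []          # only my own cert: all quiet
rogue = contact_label(b"attacker-key")
assert tampered([mine, rogue], [mine]) == [rogue]  # rogue issuance gets flagged
```

The monitoring half is the real value: unlike a key server, the append-only logs make a silent key substitution publicly visible.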

This is basically the keybase.io log system. Although they too rely on PGP currently

Minus the network of independent logs; from what I remember, only keybase.io runs its own log system. Although they do timestamp their Merkle tree roots into Bitcoin.

> > Put a Signal number on your security page to receive bug bounty reports, not a PGP key.

Does anyone actually do this? Even Signal developers themselves don't! (see https://support.signal.org/hc/en-us/articles/360007320791-Ho...). Instead there is a plain old email address where you are supposed to send your Signal number so that you can chat.

We manage bug bounties for a bunch of different startups, and I can count on zero fingers the number of times I've had to use PGP in the past year for that. In practice, people just send bugs with plain ol' email.

I used to get about 1 or 2 PGP-encrypted emails with security bug reports per year when I managed this for my employer. There's a dedicated team that receives security reports now, with email feeding into an automated ticketing system with automatic acknowledgements, reminders, spam filters, PagerDuty alerts, etc. There's a huge amount of tooling and workflow built around email, with a lot of integrations into all kinds of enterprise software. Often the only sane way to trigger all this stuff is to send an email.

So I think the result of removing PGP will be even more plain ol' email than anything else.

It sounds like you’re saying “and that’s why GPG is good”, but I read that as an argument why there’s a very high probability that one of those things is going to spill the beans, plaintext, in an email anyway.

No, I'm not defending PGP. Even without the automation, every PGP-encrypted email almost certainly results in a bunch of internal plaintext emails between employees that could easily accidentally cc the wrong person, etc. I'm just pointing out that the chances of replacing PGP with something genuinely secure for these kinds of use-cases are close to zero.

So even Latacora-advised startups use plain old email for bug bounties. Why then does the blog post recommend using Signal for that?

Because Signal would be better than the PGP theater. In practice, though, it doesn't matter; people are just going to use plain old email no matter what. They're not going to encrypt their findings to you.

Anecdote about said startups: in 2y of the one big bounty that did have a PGP key, we got one PGPd report, and it was “session takeover”: if I copy the cookie out of Burp and into a new Incognito session, I will be logged in. Bounty plz?

We also got super clever reports on that same bounty program. They just sent email.

Maybe all PGP users are morons; that's beside the point. My point is that if someone recommends something but doesn't follow their own recommendation, it is most likely that the recommendation is not well thought out and can be ignored. In this case the recommendation to use Signal looks more like a refutation of the point brought up by PGP advocates than something anyone would actually do.

That’s a fair criticism and I will happily admit that’s what it should say: that all PGP users are morons. (Just kidding. You’re right re: bug bounty advice.)

> PGP theater

Hmm I don't see it as theater if you are unable to intercept and decrypt my message. Or forge my signature, etc.

If the people from Signal start a conversation with you on the number you emailed, how do you know it’s actually them? Couldn’t it be a third party who intercepted your email?

You need to check their “safety number”, and now we’re back to the same idea as with PGP with web of trust and key sharing parties.

At some point you still need some kind of pub-key identity check if you don’t want to accidentally report your vulnerability to PRC instead.

Right, that's insecure. Maybe they should, you know, put a PGP key on their website? :)

Keybase.io supports this use case.

When talking about alternatives, Signal and WhatsApp get mentioned because they're easy to use. They are. Signal is pretty secure. WhatsApp probably is as well, but we can't be sure. That is, until it isn't anymore.

WhatsApp already has a key extraction protocol built right in for its Web interface. Signal has a web (Electron) interface as well, and a shitty one at that, where the messages also get decrypted. For WhatsApp, this means you're one line of code away from Facebook extracting your private keys.

Signal is different, in that they're not a for-profit company. However, they've shown in the past that they are under no circumstances willing to allow support of any unofficial client or to federate with another server. In fact, they've taken steps against alternative clients in the past, making it clear that only their client is allowed to use the Signal system. The moment the Signal servers go down, Signal becomes unusable. This also leaves Signal in the same position as WhatsApp, where we are dependent on one party compiling the app and publishing it on whatever app store you prefer. If Signal has any Australian contributors and their code review fails sufficiently, you're basically toast the moment the Australian government gets annoyed enough at a particular Signal user.

Very few real alternatives to PGP exist. PGP is not just a message encryption format, it's a _federated_ message encryption format. There are very few actual federated message standards that come close to the features PGP supports. There's S/MIME, but that's only of any use after paying an expensive company, because it's validated like a normal TLS certificate and the free cert providers don't do S/MIME.

If all of these "real cryptographers" disagreeing with PGP's design would design a new system that can be used the same way PGP is used, I'm sure we'd see that getting some good usage figures quite quickly. But all alternatives seem to focus on either signing OR encrypting OR the latest secure messaging app instead of a PGP replacement.

>WhatsApp already has a key extraction protocol built right in for its Web interface.

I don't believe this is correct. WhatsApp (and Signal AFAIK) web works by decrypting the original message on your phone, re-encrypting it with a different key that is shared with your web interface (this is what is being shared via the QR code when connecting to WhatsApp Web), sending it to the web client, and having your web client use the second key to decrypt. This is why your phone must continue to be powered on/connected to the network for the web service to work. The original key is never "extracted", and AFAIK can't be extracted by normal means.

There are a few apps that attempt to exploit a few security vulnerabilities to recreate your key for you if you lose it and need to access backups, but that isn't the same as what you're describing.
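The WhatsApp Web flow described above - decrypt on the phone, re-encrypt under a session key shared via the QR code - can be modeled in a few lines. The cipher below is a toy (SHA-256 in counter mode, not real crypto) and all the key names are invented; the point is only that the long-term key never leaves the "phone":

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher (NOT real crypto): SHA-256 in counter mode.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; encryption and decryption are the same op.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

e2e_key = b"phone-only end-to-end session key"
qr_key = b"session key shared with the browser via QR code"

wire_msg = xor_crypt(e2e_key, b"hello from the other side")

# The phone decrypts with the E2E key and re-encrypts for the browser;
# e2e_key itself is never sent anywhere.
plaintext_on_phone = xor_crypt(e2e_key, wire_msg)
relayed = xor_crypt(qr_key, plaintext_on_phone)

# The web client only ever holds the QR-shared session key.
assert xor_crypt(qr_key, relayed) == b"hello from the other side"
```

This is also why the phone has to stay online: the browser has no way to decrypt `wire_msg` directly.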

WhatsApp always requires your phone to be around, whereas Signal needs it only when you link it. After linking, the desktop client is independent of the phone (being online or in your vicinity or the number being in your possession).

Yep, you're right. I just looked more into it and WhatsApp and Signal operate differently. WhatsApp works as I described, but Signal actually does share the original key between all devices through some sort of key sharing mechanism.

Fair enough, I suppose it's more of a plaintext extraction protocol.

Still, it would take just one decision by Facebook to completely disable e2e or add an actual key extraction method to WhatsApp and there's nothing you can do about it. While WhatsApp is the most secure of all conventional chat apps, it's certainly not a replacement for PGP in most use cases.

Signal-Desktop works even when your phone is turned off.

I think the real problem is that nobody has ever created a decent PKI, and I doubt a sufficiently secure PKI is even possible.

CAs require you to trust people that aren’t supposed to be party to the communication (trust both not to be hostile, and not to be insecure themselves).

All other forms of PKI offer entirely impractical authentication mechanisms. With signal and the like, your options are

1) Verify keys by being in the same room as the other party before communication, and after every key rotation

2) Just hope that the keys are genuine...

The only thing that you can trust is that the party you’re communicating with is one of potentially many holders of the correct key.

I would regard the Web PKI as the only decent global public PKI, but sure, whatever.

You don't seem to have understood what's going on in Signal. Ordinary key rotations, which happen automatically, do not change the verified status. What can happen is that another participant changes phone or wipes it, and so obviously trust can't survive that change.

The problem isn't that somebody else may know the correct key, the Double Ratchet takes care of that. The problem is that a Man-in-the-middle is possible. Alice thinks Mallory is Bob, and Bob thinks Mallory is Alice. Mallory can pass messages back and forth seamlessly, reading everything. Only actually verifying can prevent this.

You don't verify the encryption keys, that's useless because those change constantly, the verification compares the long term identity value ("Safety Number") for the conversation between two parties, which will be distinct for every such conversation. Mallory can't fake this, so if Alice and Bob do an in person verification step Mallory can't be in their conversation.
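A toy illustration of why the in-person comparison catches Mallory (same caveat as above: a single SHA-512 round stands in for Signal's real iterated fingerprint derivation):

```python
import hashlib

def fp(key: bytes) -> str:
    # Toy stand-in for Signal's iterated-SHA-512 numeric fingerprint.
    return str(int.from_bytes(hashlib.sha512(key).digest(), "big"))[-30:]

def safety_number(a: bytes, b: bytes) -> str:
    # Per-conversation value: sorted concatenation of both fingerprints.
    return "".join(sorted([fp(a), fp(b)]))

alice, bob, mallory = b"alice-key", b"bob-key", b"mallory-key"

# With Mallory in the middle, Alice's app shows the Alice<->Mallory number
# and Bob's app shows the Mallory<->Bob number. Neither matches the real pair:
assert safety_number(alice, mallory) != safety_number(alice, bob)
assert safety_number(mallory, bob) != safety_number(alice, bob)
# And the two victims' displayed numbers don't match each other either,
# so reading them aloud in person exposes the interception:
assert safety_number(alice, mallory) != safety_number(mallory, bob)
```

Mallory can relay every message perfectly, but she cannot make both sides display the same safety number without knowing Alice's and Bob's private keys.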

The implementation details have some UX benefits, but all they do is kick the can down the road, not solve the problem. You need a secure channel to authenticate the keys (or “safety numbers”, or whatever you want to call them). This can only practically be done face-to-face (or by getting somebody you trust to do it face to face - to act as if they were a CA). You need to do this prior to first communication, and additionally every time somebody loses their key material.

Some people will be motivated enough to do this, most won’t, and this absolutely can’t scale.

All known PKI systems are either impractical, or require a level of trust that undermines the system entirely. You can say your threat model doesn’t require that much security, but in that case it probably doesn’t require a PKI either.

Signal PKI doesn't really work for me, conceptually. I mean, Signal is great work, but the approach to key management and federation seems like it undermines the regular security of the approach.

The problem is the key servers are run by the same people who control the app. This helps if the key server specifically gets compromised and the target is verifying, but for many attacks people worry about it's actually not the key servers specifically that get popped, it's an employee laptop or the employee themselves via subpoena, policy change etc. And for those cases nothing stops the app itself being changed to show you a false safety number, possibly by Apple without the app vendor even knowing.

So we end up with a rather curious and fragile threat model that only really helps in the case of a classical buffer overflow or logic error that grants an adversary the ability to edit keys and not much else. It's very far from "you don't have to trust the providers of Signal" which is what people tend to think the threat model is.

And honestly, a technique that combats very specific kinds of infrastructure compromise is too low-level, IMO, to bother advertising to users. The big tech firms have all sorts of interesting security techniques in place to block very specific kinds of attacks on servers, but they generally don't advertise them as primary features. If you have to trust the service provider, and with both Signal and WhatsApp you do, then are you really getting much more than with bog-standard TLS? After all, forward secrecy achieves nothing if the provider is diligently deleting messages after forwarding them to the receiving device - the feature only has value if you assume the provider is recording all messages to disk and lying about it, in the hope of one day being able to break the encryption of ... their own app. Hmmm.

with signal I believe you can have already trusted contacts vouch for new contacts

You're thinking of PGP web of trust, Signal doesn't have that

you can have contacts send you another contact. There is no way to set up a server of public identities, but you should be able to share your own contacts.

Signal won't validate the session key via that mechanism, each pair of communicating users have to do that themselves

TIL; does this also mean that you can "fake" forwarded messages?

> But all alternatives seem to focus on either signing OR encrypting OR the latest secure messaging app instead of a PGP replacement.

OP mentions exactly this point in the "The Answers" (https://latacora.micro.blog/2019/07/16/the-pgp-problem.html#...) section.

I'm a little confused as to why you mention Signal and WhatsApp but not Telegram?

Most cryptographers do not see Telegram as a secure encrypted protocol. This is for two reasons: the first one is that Telegram doesn't do end-to-end encryption by default (and if you enable it, functionality is limited). And secondly, they roll their own cryptographic protocol.

Telegram's crypto is based on their own, contested protocol and is disabled by default. Telling someone to use that for secure communications is difficult because you also need to remind people to turn on encrypted communications.

Furthermore, Signal and WhatsApp do E2E in group chats, where Telegram doesn't.

Don't get me wrong, I use Telegram daily (its desktop clients far outperform any of its competitors), but it's not as secure as WhatsApp or Signal.

I'd classify Telegram as "maybe secure" but I wouldn't recommend it to people depending on the security of their messenger application.

Telegram invented its own crypto; without an audit it's untrustworthy. Only Signal and Keybase have been audited, so WhatsApp should be excluded from the list of trustworthy IM apps as well.

Whatsapp uses the exact same technology as Signal. If you consider that Signal is fine based on an audit of what is clearly an older version (do audits come out every day with new Signal releases? No, so the code you're running wasn't covered by the audit) then Whatsapp is fine based on being the same protocols with different branding.

An audited piece of software with relatively minor changes is definitely far more trustworthy than a piece of software that is much much more different. Whatsapp is garbage compared to Signal.

"without an audit"

It would be more accurate to say that they have failed every attempted audit of the protocol.

I'd love to read more about this, could you please provide a few links?

Just FYI, Actalis will mint you a free S/MIME cert.


Where they generate your private key :(. I'd rather have them sign my own key.

I understand that there are better tools for encryption, but is there anything that replaces the identity management of PGP? Having a standard format for sharing identities is necessary in my opinion. If I have a friend (with whom I already exchanged keys) refer me to some third friend, it would be nice if they could just send me the identity. Sending me the Signal fingerprint isn't a solution for two reasons:

- I don't want to be manually comparing hashes in 2019

- it locks me into Signal; I won't be able to verify a git commit from that person, for example

Is there a system that solves this? Keybase is trying but also builds on PGP, we can use S/MIME which relies on CAs but is not better than PGP. Anything else?

Keybase builds a lot on top of saltpack, which works like a saner PGP: https://saltpack.org

The underlying cryptography is NaCl, which is referenced in the original post.

I don’t get it. How does Saltpack solve the issue of identity management?

saltpack does not solve identity management itself, it is merely the cryptography and the physical format. Keybase, however, is all about identity: https://keybase.io.

I think that currently Keybase.io is the only thing trying to be universal, with their transparency log plus links to external profiles along with signed attestations for them.

But even that's still not quite what I'm looking for. There's no straightforward way to link arbitrary protocol accounts/identities to it, outside of linking plain URLs.

We need something a bit smarter than keybase that would actually allow you to maintain a single personal identifier across multiple protocols.

Also, as I looked a little bit into Keybase I learned that they don't support any protocols that don't have public profile-like pages. So a pure messenger wouldn't be supported by Keybase.

They're opening up the protocol so that any website can provide the authentication, and Mastodon already implements it: if you have an account on Mastodon, you can have an additional proof on keybase.

See the blog post and the spec that details the changes to implement: https://keybase.io/blog/keybase-proofs-for-mastodon-and-ever...

I presume a pure IM system would have to implement some web gateway at the server level.

The CA system is strictly better than PGP for identity management in every respect.

People often think it must be the opposite, but this is essentially emotional reasoning: the Web of Trust feels decentralised, social, "webby", "un-corporate", free, etc. - all things that appeal to hobbyist geeks with socialist or libertarian leanings, who see encryption primarily through the activist lens of fighting governments / existing social power structures.

But there's nothing secure about the WoT. As the post points out, the entire thing is theatre. Effectively the WoT converts every PGP user into a certificate authority, but they can't hope to even begin to match the competence of even not very competent WebTrust audited CAs. Basic things all CAs are required to do, like use hardware security modules, don't apply in the WoT, where users routinely do unsafe things like use their private key from laptops that run all kinds of random software pulled from the net, or carry their private keys through airports, or accept an email "From" header as a proof of identity.

I wrote about this a long time ago here:


In an otherwise good write-up, I disagreed with this line:

" There is no scope for difference between a “big corporate” CA and a “small politically active” CA because the work they do is so mechanical, auditable and predictable."

There is room for a politically-active CA like there is for anything else. In each market, there are players that get business for doing better things for privacy, being eco-friendly, being more inclusive, etc. - things that get business from vote-with-your-wallet types. My idea, inspired by Praxis doing Mondex's CA, was a non-profit or public-benefit company that had built into its charter and legal agreements many protections for its customers, in a country without secret laws/courts like the U.S. Patriot Act. The CA would also be mandated to use high-security approaches for everything it did, not just HSMs. They might also provide services like digital notary.

In short, I can imagine more trustworthy and innovative CAs being made. I'd easily pay one like that over the rest. I'm sure there's some number of people and businesses out there that think the same way. I wouldn't try it as a main business, though, since the market is too cut-throat. My idea was that a company like Mozilla would try it to see what happens. Let's Encrypt confirmed the non-profit, public-benefit part being feasible.

> But there's nothing secure about the WoT.

I haven't read your blog, but this sentence unfairly paints WoT with PGP/GPG's problems.

It's completely reasonable to have a WoT that operates correctly when at least a single participant isn't completely incompetent. That's how git works.

I haven't looked closely but I'd be willing to speculate that PGP is to WoT what C++ is to fast compile times.

Just to drive the point home, compare:

* the amount of time some pedants waste at a PGP "key party"

* the time it takes me to accept a merge request from someone who made a commit in the gitlab UI using Internet Explorer on their grandparents' malware-laden Dell desktop

Both examples leverage WoT.

Edit: hehe I wrote PGP "key party" instead of "key signing party."

Are we talking about extended validation or domain validation?

With domain validation it is likely better to use DANE in the context of email. The sender looks up the key and MX record and acts accordingly, and for Postfix there are plugins that already do it. Very few current users, however.
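For reference, the DANE binding for inbound SMTP is a single TLSA record on the MX host. A placeholder zone-file sketch (the digest would be the SHA-256 of the server's SubjectPublicKeyInfo; this one is made up):

```
; usage 3 = DANE-EE (match the end-entity cert directly),
; selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256.
; The digest below is a placeholder, not a real hash.
_25._tcp.mail.example.com. IN TLSA 3 1 1 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```

A sending MTA that supports DANE (e.g. Postfix with `smtp_tls_security_level = dane`) then refuses to deliver without TLS matching the published key, closing the same downgrade hole that MTA-STS addresses. It does require DNSSEC on the receiving domain, which is part of why adoption is thin.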

I am not against CAs, but even then, what tool do you use? Personal SSL certificates for signing? Except for S/MIME I haven't seen this used anywhere.

The CA system as set up today is a bit fragile and much too limited, though. If it was all we needed, everybody would be using S/MIME with signed certs.

We need something more expressive than the current CA system, where you can make the choice to define your own trusted roots.

You can always edit the trust store to add or remove certs your local computer trusts. That's easy, there are even GUI tools available to do it. Heck on MacOS there's even a GUI wizard to create a local CA from scratch!
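The CLI version of that takes two commands with plain `openssl`; file names and the subject here are placeholders:

```shell
# Create a throwaway local CA: RSA key plus a self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -days 30 -nodes \
    -keyout local-ca.key -out local-ca.crt \
    -subj "/CN=Example Local CA"

# Inspect what was produced.
openssl x509 -in local-ca.crt -noout -subject
```

Getting anything to trust it is then a per-machine step (the Keychain GUI on macOS, update-ca-certificates on Debian-likes), which is exactly the part that doesn't scale socially.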

Nobody does it because the hard part of being a CA isn't the protocol part, it's convincing everyone that you're going to do a good job of issuing certificates. The WoT just ignores that problem entirely - and it's ultimately a social issue.

But you can't trivially define your own scopes, each with its own independent set of trusted CAs. That's part of what's missing. By default it's universal or per-program.

Just look at every kind of umbrella organization out there like industry specific auditors with a scope limited to a field (medical, finance, food safety), or even hobby organizations with a parent organization auditing local chapters.

You don't go to the social security office to look up your neighbor's phone number when you need to talk to them. The attributes people care about are often more local, more narrow.

People first go to local trust anchors to get information about things (and their software clients could then traverse various directories up to a root and back down, if necessary). I need my client to be able to understand an assertion from an entity far more personal to me than a distant CA. The CAs are most useful for ephemeral connections, not long-term ones.

This is what I mean when I say the CA system isn't expressive enough.

The missing answer in there is how to avoid PGP/GnuPG for commit signing. I've asked about this in another similar thread[0] but didn't get a hopeful answer.

Every time I look at git's documentation, GPG seems very entrenched in there, to the point that for things that matter I'd use signify on the side.

Is there a better way?

[0] https://news.ycombinator.com/item?id=20379501

It seems pretty clear that, with the current tools available, there is no way to do this (at least with git). There's nothing in principle difficult about it, just that (say) git+signify hasn't been implemented.

I'm getting the strong sense (see also my toplevel comment, and maybe someone will correct me and/or put me in my place) that there's an enormous disconnect between the open source + unix + hobbyist + CLI development communities, and the crypto community. The former set has almost no idea what the state of art in crypto is, and the latter (somewhat justifiably) has bigger fish to fry, like trying to make it so that non-command-line-using journalists have functional encryption that they can use.

I think this is a sociological problem, not a technical "using command-line tools makes Doing Crypto Right impossible".

Signing tags (or somewhat less usefully, commits) can be done the same way packages are signed. It might not be directly integrated with git, but it wouldn't be hard to make a good workflow.

The article mentions Signify/Minisign [1] as a PGP alternative.

[1] https://jedisct1.github.io/minisign/

> It might not be directly integrated with git

That's the problem I see. I have signingkey in .gitconfig, together with [commit] gpgsign = true. This way, set & forget, all my commits are signed (it's my employer's requirement, probably some "compliance" stuff). You can see it right away, nicely displayed as "Verified" on GitHub. I didn't know about GPG's supposedly weak security until now, but I've always considered it not very convenient to use.
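For anyone setting up the same thing, the two settings mentioned look like this in ~/.gitconfig (the key ID is a placeholder):

```
[user]
    signingkey = 0xDEADBEEFCAFE1234
[commit]
    gpgsign = true
```

With that in place, plain `git commit` signs automatically, and `git log --show-signature` (or GitHub's "Verified" badge) will show it.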

Ah, well if your employer mandates PGP signatures on every commit, that's that.

FWIW, the creator of git argues that signing of every commit is essentially pointless. [1] I agree.

[1] http://git.661346.n2.nabble.com/GPG-signing-for-git-commit-t...

So the suggested solution for more secure email is just to give up on the concept of email entirely? Anything that does not do perfect forward secrecy is just pointless, so there is no point in trying to keep significant discussions to refer to later. We are expected to return to a sort of virtual pre-writing stage.

This is not really helpful. For all its shortcomings, PGP is pretty much all we have. If used in a straightforward way it actually can protect email from nation state level actors for a significant time. That's gotta count for something.

I know right - they are all recommending Signal, Wire, WhatsApp, etc., but these aren't alternatives. They are all centralized, controlled by a single entity even if the underlying protocols are open. And you're right, they are instant messaging - i.e., alternatives to Messenger, Hangouts, etc.

We need a modern email replacement that is decentralized, federated, et al. Something that keeps all the modern cryptographers happy, while facilitating the same kind of long form conversations and federated self-hostability that email provides.

I think Matrix is getting there, but even that is still focused on instant messaging.

As I've mentioned in another thread, I think we're more likely to get there via the route of E2EE documents collaboration tools. Something that cleanly breaks away from the email model, and adds enough value to make the switch worth the effort.

Telling people to treat email as insecure, and thus not to use it for anything serious, is terribly bad advice.

I am reminded of BGP (Border Gateway Protocol). Anyone who has even glanced at the BGP RFCs could write an essay about its horrible mess of compatibility hacks, extensions, and non-standard design. It also lacks any security considerations. The problem is that it is core infrastructure of the Internet.

Defining something as insecure, with the implied statement that we should treat it as insecure, is unhelpful advice in regard to critical infrastructure. People are going to use it, continue to use it for the foreseeable future, and continue to treat it as secure. Imperfect security tools will be applied on top, imperfectly, but it will see continued use as long as it is the best tool we have in the circumstances. Email and BGP and a lot of other core infrastructure that is hopelessly insecure will continue to be used with the assumption that they can be made secure, until an actual replacement is made and people start to transition over (like how IPv6 is replacing IPv4, and we are going to deprecate IPv4 if you take a very long-term view of it).

People that use email to convey sensitive messages will be putting themselves and others at risk, whether or not they use PGP, for the indefinite future. That's a simple statement of fact. I understand that you don't like that fact --- nobody does! --- but it remains true no matter how angry it makes you.

I would say that people who use any critical infrastructure not designed with security in mind are putting themselves and others at risk if they convey sensitive information. This is why plaintext protocols should be considered insecure.

It would be great if we could replace the whole Internet with modern technology rather than relying on ancient systems like BGP and email.

> It would be great if we could replace the whole Internet with modern technology rather than relying on ancient systems like BGP and email

I've occasionally thought of starting a long-term project that could eventually do that, assuming that politicians screw things up the way it looks like they're going to over the next few decades.

The idea is that a group of interested people would develop these new system with no requirement whatsoever to have backward compatibility or interoperability with the current systems.

Of course these new systems would not get widespread adoption. They'd probably only be used by the developers and a few others who are willing to essentially have two completely different systems in parallel: the new stuff for communications among themselves and the current stuff for everything else. That's fine. It means no pressure to compromise to get something out faster.

Lack of adoption is not a problem. That's where politicians come in. What we are counting on is that those idiots are going to manage to cause, or fail to prevent, some apocalyptic event(s) that will destroy the current systems so thoroughly that when the survivors get around to rebuilding the Internet and communication infrastructure, they are starting from a clean slate.

How do you write this after writing that previous comment, which says that what I just wrote is "terrible bad advice"?

Telling people to stop using the Internet because it is insecure is bad advice. It is extremely unrealistic, like telling people to stop using cars and trucks because driving kills people every year.

However, suggesting that we should change things to eliminate the risk is good. We could eliminate car accidents completely if everyone switched over to automatically driven cars that communicated as a mesh network. The Swedish "Vision Zero" could be achieved, maybe even with today's technology, but it would be a massive undertaking.

Replacing BGP would be a similarly massive undertaking. Just switching from IPv4 to IPv6 has so far taken 20 years, and we have no date in sight when we can start deprecating IPv4. From what I have heard and seen, a lot of people are reluctant to issue backward-incompatible replacements of core infrastructure because they look at IPv6 and fear that kind of process. I have even seen some pessimistic talks arguing that it is impossible, and that the only way to achieve change in core infrastructure is with incremental changes that are fully backward compatible. I am not really of that view, but I do understand their fear.

My advice to people is not to abandon email, even if I doubt many people would heed the warning that email is unsafe for governments, businesses, people, and their families. People will risk it regardless. Thus I focus on what may help, imperfect as it may be. In the past that was PGP in the form of the Enigmail plugin. Today I am keeping an eye on the new pretty Easy privacy, which hopefully can outsource the security to a library that attempts opportunistic encryption whenever possible.

The PGP team openly and enthusiastically discusses how they've advised dissidents in places like Venezuela, about which the NYT just recently ran an exposé on death squads sponsored by the Maduro administration. What they're telling dissidents to do has to work. It demonstrably doesn't. Pretending otherwise, because everyone else does it, is malpractice. I don't understand where the wiggle room people are finding on this issue is.

The only advice you can semi-safely give to dissidents who face state-organized death squads is to hide and get new identities, and never ever reveal the old ones to anyone.

Signal will not make people immune to death squads, nor will any other technology. It was not that long ago that members of Anonymous went after the cartel and we got pictures of people tortured and killed. It only takes one trusted person who knows a dissident's real identity or family or friends or community for things to get very ugly very fast.

If the PGP team promised security against state organized death squads then that's their fault. Pretending that technology will protect you against that kind of threat can be a very costly mistake.

I would suggest using restic over Tarsnap for encrypted backups -- it gives you more flexibility in where your backups are stored, since Tarsnap is pretty tightly integrated with Colin Percival's online service and is also unfortunately not free software. But restic is also an as-simple-as-possible cryptosystem. Filippo did a quick look through it and said that it seemed sane from a crypto perspective[1].

[1]: https://blog.filippo.io/restic-cryptography/

I'm using Restic, and it mostly works. Unfortunately, it uses an absurd amount of memory, proportional to the size of your backup store. So, don't use it to back up a small VPS.

If you do, "export GOGC=20" can help a little, but it'll still use a lot of memory.

Restic is also fine.

I wish that restic supported asymmetric keys. I'm uncomfortable storing the key alongside the backup tool, even if it just gets injected at runtime. If a nefarious party gets the key all my backups from that key are vulnerable.

I suspect that it's probably hard to add that functionality because you can't do the deduplication without decrypting the prior backups (or at least an index). That would also explain the memory usage JoshTriplett mentions.

Any opinions about rclone?

It seems to be fine for my mediocre backup needs

rclone is fine (in fact I use rclone with restic to synchronise my restic backup repository on Backblaze B2) but it doesn't encrypt or deduplicate your backups -- it's just a synchronisation tool like rsync.

What about its 'crypt' encryption backend?

AFAIK it uses scrypt, which was designed for Tarsnap.

Ah, I wasn't aware it had an encryption backend. Just looking at the documentation I'm quite worried -- why on earth is there an "obfuscate" mode?

I would suggest using restic. It doesn't have any weird modes, it simply encrypts everything and there isn't any weird need to specify (for instance) that you want things like metadata or filenames encrypted. Also backups are deduplicated using content-defined-chunking -- giving you pretty massive storage savings. If you really need rclone since it supports some specific feature of your backend -- restic supports using rclone as a synchronisation backend (or you can do what I do, which is to batch-upload your local restic backup every day).
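The content-defined chunking mentioned above is what makes deduplication robust: chunk boundaries are derived from the data itself via a rolling hash, rather than cut at fixed offsets, so an insertion early in a file only disturbs the chunks near the edit. A toy sketch of the idea in Python (the window, mask, and minimum size here are illustrative values, not restic's actual algorithm, which uses a Rabin fingerprint):

```python
import hashlib

def cdc_chunks(data: bytes, window=48, mask=(1 << 6) - 1, min_size=64):
    """Split `data` where a Rabin-Karp rolling hash over the last
    `window` bytes matches a boundary pattern. Because boundaries
    depend only on local content, inserting bytes early in a file
    only changes the chunks around the insertion point."""
    M = 1 << 32
    pow31w = pow(31, window, M)   # 31^window mod M, for removing old bytes
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * 31 + b) % M                          # slide the new byte in
        if i >= window:
            h = (h - data[i - window] * pow31w) % M   # slide the old byte out
        if (h & mask) == mask and i + 1 - start >= min_size:
            out.append(data[start:i + 1])             # content-defined boundary
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out
```

Chunks are then stored under their content hash, so chunks unchanged between snapshots are stored only once.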

It's a choice between plaintext filenames or no plaintext filenames.

You might want to crypt your nudes, but still access normal pictures unencrypted through the provider's web interface.

I am a combination of honored and terrified that signify is the leading example for how to sign packages. The original goals were a bit more modest, mostly focused only on our needs. But that's probably what's most appealing about it.

I think you should get comfortable with that, because all the opinions I've collected seem to be converging on it as the answer; Frank Denis Frank-Denis-i-fying it seems to have clinched it. Until quantum computers break conventional cryptography, it would be downright weird to see someone design a new system with anything else.

We over at Sequoia-PGP, which gets an honorable mention from the OP, are not merely trying to create a new OpenPGP implementation, but to rethink the whole ecosystem.

For some of our thoughts on the recent certificate flooding problems, see https://sequoia-pgp.org/blog/2019/07/08/certificate-flooding...

The homepage has some meaningless marketing bullet points about the greatness of pgp itself. Where would I find the ways in which you rethink the whole ecosystem? It seems like Sequoia is just a library, not even a client. Wondering how this could change pgp by much if at all.

The PGP problem isn't going away until there is a stable alternative. Under 'The Answer' there are several different, domain-specific tools, with their different features and UIs and limitations. And for the general case (encrypting files, or really 'encrypting data'), "this really is a problem". If I want to replace my use of GnuPG in production with the things on that list, I need to write my own encrypt/decrypt wrappers using libsodium and hope that future travellers can locate the tool or documentation so they can decrypt the data. So I stick with the only standard, GnuPG, despite acknowledging its problems.

What specific problem are you trying to solve with PGP? If it's "encrypting files", why are you encrypting those files? What's the end goal? I acknowledge that there are cases that boil down to "encrypt a file", but believe they are a lot narrower than people assume they are.

We encrypt files:

- For offsite backups (disaster recovery), mirroring object stores and filesystems to cheap cloud storage.

- For encrypting secrets needed for maintaining IT systems (eg. all those shared passwords we never seem to be able to get rid of)

- For encrypting sensitive documentation for transfer (email attachment, shared via filesystem, shared via HTTP, shared via pastebin even)

Despite the awful UI, GnuPG does all of that in a standard way. We have tested disaster recovery with no more instructions than 'the files are in this S3 bucket'.

And the same tool is also useful for other tasks too:

- public key distribution (needs care to do it securely, but functional)

- commit signing, signed tags

- package signing (per Debian)

We could use custom or multiple tools for all this, but a single tool to learn is a big advantage.

I think all use cases boil down to 'encrypt and/or sign a file' for one of the stages. In the article, 'talking to people', 'sending files', 'encrypting backups' are all really just 'encrypt/sign a file' followed by transmission. And some sort of keyring management is needed for usability. A tool that can pull keys from a repository and encrypt and/or sign a file to a standard format could be used to build all sorts of higher level tools. I imagine it would be quite possible to build this on top of libsodium, and if it gained mindshare, replace uses of GnuPG.

> I think all use cases boil down to 'encrypt and/or sign a file' for one of the stages. In the article, 'talking to people', 'sending files', 'encrypting backups' are all really just 'encrypt/sign a file' followed by transmission.

But they aren't the same thing. That's the whole point the article is making. Yes, if all you have is a tool that does "encrypt+sign a file", then all crypto problems will look like "encrypt+sign a file" problems.

For backups, tools like restic provide deduplication and snapshots as well as key rotation (and restic works flawlessly with dumb S3-like storage). You can't do that with PGP without reimplementing restic on top of it. Same with TarSnap. For talking to people, you want perfect forward secrecy and (usually) deniability. PGP actively works against you on both fronts. For sending files, there are also other considerations that Wormhole handles for you (though to be honest I haven't used it in anger).

While you can "solve" these problems with one tool, the best way of solving them is to have separate tools. That's the point the article is making.

For securely talking to people, often you may want non-repudiation, which is the exact opposite of deniability and anonymity.

There are very different, incompatible needs for slightly different usecases.

Signal -- and all other OTR-like protocols -- have deniability (or if you prefer it has repudiation rather than non-repudiation). Neither conversation participant can prove to a third party that the other party said something in a conversation. Moxie wrote a blog post about this in 2013[1].

The only circumstance in which you want non-repudiation is if you are really sure that you are okay with the recipient of your message later posting cryptographic proof that you said something in a chat with them. I bet most people (if you asked them) would effectively never want that "feature" for private chats.

[1]: https://signal.org/blog/simplifying-otr-deniability/

Sure, you usually don't want that feature in private setting, but you almost always want that feature in a commercial setting, and lots of communication happens in that context.

E.g. vendor-customer helpdesk chat, internal workplace communication including "less internal" things like different subsidiaries of international companies, etc, etc. Half of the financial world runs on Thomson Reuters messenger, which is essentially a glorified chat. What if your boss sends you a message "hey, do that risky thing right now" - do you want that (likely informal) means of communication to have deniability? Does the company want deniability in the app in which random middle managers message their subordinates? It makes sense for companies to mandate that teams choose only communications platforms that support authentication and non-repudiation.

As soon as money, any kind of disputes, and the smallest chance for future legal proceedings are involved, anonymity and deniability are flaws and not features - as I said above, superficially similar use cases can have opposing and incompatible requirements.

Even going back to the commonly discussed use case of Signal for journalism. Let's say a journalist interviews a whistleblower over a mobile messaging app - you'd want anonymity and deniability there. And five minutes later that same journalist asks a clarifying question to the official head of that agency, likely also using a mobile messaging app, possibly the same one. Do you want the answer of that official to have deniability, or do you want that journalist to be able to cryptographically prove that the official lied?

I think our main disagreement is the usage of the word "usually". There are chat systems that have non-repudiation that aren't PGP -- I don't think there's much more to elaborate.

For personal communications, usually people want deniability. For business-related communication, you might want non-repudiation.

Probably the point I'm trying to make is that for me the communication scenarios seem similar enough, and the line between business and consumer communication is so blurry (with people using private mobile devices in business and expecting the same set of tools to cover all their communication), that saying "oh, there's another tool that does it the opposite way" isn't really a sufficient answer. Perhaps we need to treat it as essentially a "flag" in the same tool: this user account/chat group/etc. is authenticated and doesn't have any deniability whatsoever, but in the same app over there I have a pseudonymous contact with a marker over it that's effectively anonymous with full deniability and OTR communications.

Magic Wormhole is neat, but the happy path is to use a third-party rendezvous server, which is susceptible to traffic analysis (I also wish that the codes had more than 16 bits of entropy, but that is partly cargo-culting on my part).

Signal is also vulnerable to server-side traffic analysis, and is strangely keen on both demanding a selector (a phone number) for identity and on trusting Intel's 'secure' enclave (I strongly suspect that it's suborned).

One thing I do like about PGP is that it has been around awhile: I can still decrypt my old files & verify old signatures just fine, something I don't trust the flavour of the month to do.

I think that rather than a Swiss Army knife tool & protocol like PGP, we should have a suite of tools built around a common core, probably NaCl. That way we can have compatibility when we need it, but also aren't cramming square pegs into round holes.

Finally, the Web of Trust was a bright, shiny idea — and wrong. But Trust on First Use is also pretty bad, as is the CA ecosystem. We need something else, something decentralised and also secure — and I don't think that's impossible. I've had a few ideas, but they haven't panned out. Maybe someday.

> Finally, the Web of Trust was a bright, shiny idea — and wrong

Yeah, this whole part is some 90s cypherpunk way of modeling human relations, which has never mapped onto any real world relationships. As soon as people had real world digital identities outside of their gokukillerwolfninja666 logins, this didn't help.

CA ecosystem might be fundamentally flawed, but WoT was a complete failure. So PGP users end up trusting some key server which is probably sitting under someone's desk and has been owned by every serious intelligence service since forever.

Re: common core: isn't that already happening? age and signify are both based on those 2, and magic-wormhole arguably is too (though point addition is not always exposed, so SPAKE2 is a little harder to implement than signify).

Yeah-ish, but what I mean is an actual set of different tools (so, not one-size-fits-all) but all part of the same suite, rather than a bunch of different implementations of mostly the same idea — i.e., *BSD rather than Linux.

One of my numerous hobby projects is exactly that, but … I simply don't have enough Round Tuits.

OK, so you're saying something like:

  magic send <FILE>
  magic receive <CODE>
  magic encrypt <FILE>
  magic sign <FILE>
... that ideally all have NaCl at the base but are otherwise one binary that you have to remember?

The tricky one there is probably chat.

I think they meant something like the OpenSSL binary that obviously builds on the library and provides everything through the command line.

The problem is that the same thing for NaCl/libsodium would be lower level, and probably still not enough as exhibited in the article: a typical use case is not "I want to encrypt this file", it's "I want to send this file to that person such that no one else can read it" or "I want to send a message to that person such that no one else can read it, and if they can they shouldn't be able to read other messages from the same conversation". No cli tool can properly solve this, it has to be incorporated in the application or even protocol.

Incidentally, the real goal of magic-wormhole is to provide the initial secure introduction between two people's communication tools. Get your public key into my address book safely, and then all those other modes have something to work from. Keybase.io is kinda in the same direction except they're binding key material to de facto identity services (github, twitter, etc) rather than pairwise introduction.

Well, they don't all have to be one binary, but I'd like them to use consistent file formats, command-line arguments &c.

And yeah, chat is very much not like the rest — but I'd still like my chats to be somehow validated with my magic ID.

Couldn't we instead just cut the shared password in 2 (or use 2 passwords, it's the same), so we don't require point addition? I really don't feel like implementing key exchange in Edwards space just so I can have the complete addition law… unless maybe I don't need the law to be complete?

Here's how it could work (unless you tear it apart):

  Alice and Bob share two passwords out of band: Pa and Pb
  Alice and Bob generate two key pairs ka/KA, and kb/KB
  Alice sends KA, Bob sends KB
  Alice and Bob compute ss = HASH(DH(ka, KB)) = HASH(DH(kb, KA))
  Alice responds to KB with Ha = HMAC(Pb, KB || ss)
  Bob   responds to KA with Hb = HMAC(Pa, KA || ss)
  Alice verifies Hb
  Bob   verifies Ha
  The session key is HASH(ss) or something
The main disadvantage is that to achieve the security of a true PAKE, passwords here must be twice as long. A 4-digit PIN here would only have the security of two digits (1/100). You'd need 8 digits to get 1/10,000 security. On the other hand, it's extremely simple, doesn't require point addition, and if there's any flaw you've probably already spotted it.
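For concreteness, the exchange above might look like this in Python. The tiny prime and generator are strictly for illustration (NOT a secure group; a real version would use X25519 or an RFC 3526 group), and the password values are made up:

```python
import hashlib, hmac, secrets

p = 2**127 - 1    # toy Mersenne prime -- NOT a safe DH group, illustration only
g = 7

Pa, Pb = b"1234", b"5678"                  # the two shared passwords

def enc(x: int) -> bytes:
    return x.to_bytes(16, "big")           # fixed-width transcript encoding

# Alice and Bob generate ephemeral key pairs and exchange public keys
ka = secrets.randbelow(p - 2) + 1; KA = pow(g, ka, p)
kb = secrets.randbelow(p - 2) + 1; KB = pow(g, kb, p)

# Both sides derive the same shared secret
ss = hashlib.sha256(enc(pow(KB, ka, p))).digest()          # Alice's view
assert ss == hashlib.sha256(enc(pow(KA, kb, p))).digest()  # == Bob's view

# Each side proves knowledge of one password, bound to the transcript
Ha = hmac.new(Pb, enc(KB) + ss, hashlib.sha256).digest()   # Alice -> Bob
Hb = hmac.new(Pa, enc(KA) + ss, hashlib.sha256).digest()   # Bob -> Alice
# Bob verifies Ha and Alice verifies Hb with hmac.compare_digest()

session_key = hashlib.sha256(b"session" + ss).digest()
```

Binding the HMACs to both the peer's public key and ss is what ties the password proof to this particular run of the exchange.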

Re: magic-wormhole's 16 bits: I don't think you should be worried about that, because SPAKE2 will give you proof positive if the attacker attempts to guess. Are you saying a 2^-16 success rate isn't good enough?

> Are you saying a 2^-16 success rate isn't good enough?

I really don't think it is, because it might be worthwhile for a particular sort of attacker, say one who runs the default rendezvous server: observe global activity, attempt to MitM every connexion for 30 seconds, then write up a spurious blog post about a 'network issue' or 'bug' or whatever which caused a brief outage. N:2^16 is okay against targeted attacks, mostly (hence my 'cargo-culting' comment), but with a large enough N …

The nice thing about 1:2^128 is that you just don't have to care.

(magic-wormhole author here)

It's probably worth pointing out that the 2^-16 chance is per invocation of the protocol.. it's not an offline attack. So you'd have to be reeeealy patient to run it enough times to give the attacker a decent chance of success.

The best attack I can think of would be for me (or someone who's camped out on my rendezvous server) to make an MitM attempt on like one out of every 100 connections. Slow enough to avoid detection, but every once in a while maybe you get a success. Of course you don't get much control over whose connection you break (if you did, you'd be back in the detectable category again).

FWIW, some numbers. The rendezvous server that I run gets reports from clients about the success/failure of the key establishment phase. Over the last year, there were 85k sessions, of which 74% resulted in success, 22% in timeouts, and 2.5% in bad key-confirmation messages (meaning either a failed attack, or someone typoed the code). So in the worst case where every one of that last category was really a failed attack, there's roughly a 2130/2^16 = 3% chance that someone managed a single successful attack last year.

But I tried to make it easy to choose a different tradeoff. `alias wormhole-send=wormhole send --code-length=4` gets you to 2^-32 and gives codes like "4-absurd-almighty-aimless-amulet", which doesn't look too much harder to transcribe.
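For anyone who wants to check the ~3% worst case above, the arithmetic is a simple one-liner (per-attempt probability 2^-16, 2130 possibly-hostile sessions):

```python
# Chance that at least one of n independent MitM attempts succeeds,
# each with probability 2^-16 (magic-wormhole's default 16-bit code).
p_attempt = 2 ** -16
n = 2130          # worst-case count of bad key-confirmations from the year above
p_any = 1 - (1 - p_attempt) ** n
# p_any comes out around 0.032, i.e. the ~3% worst case
```

Note this is the worst case where every bad key-confirmation really was an attack, rather than a typoed code.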

Yeah, I think that current default sails too close to the wind. A 2^-16 chance on a stochastic MitM attack feels fine statistically - and then you think about that one person it worked on. For them it wasn't 2^-16, it was binary: it didn't work.

They just used your "secure" file transfer mechanism and their data got stolen.

You're on the same side of this as the airline industry. The person driving a car understands intellectually that their drowsy half-attention to the road is statistically going to kill them, whereas travelling in coach on a mid-range two-engine jetliner is not - but emotionally they consider driving to be fine because they're doing it, and air travel is trusting some random stranger. As a result, the industry needs to make air travel so ludicrously safe that even though emotionally it still feels dangerous, passengers will put that aside.

2^-16 is intellectually defensible, but my argument above is that shouldn't be what you're going for. So that's why I'd suggest a longer code by default.

Okiedokie: you should just use wormhole then: wormhole receive 4-gossamer-hamlet-cellulose-suspense-fortitude-guidance-hydraulic-snowslide-equation-indulge-liberty-chisel-montana-blockade-burlington-quiver :-)

Isn't the code length also selectable?

Yep, the default Python implementation lets you pick your own code, or generate one with more than 2 words. I'm just cautious about telling people to twiddle knobs that don't need twiddling :-) But you can absolutely go up to 2^-64 or whatever if that makes you happy!

Really, all I did here was combine posts from Matthew Green, Filippo Valsorda, and George Tankersley into one post, and then talk to my partner LVH about it. So blame them.

(also 'pvg, who said i should write this, and it's been nagging at me ever since)

The elephant in the room is "what to do about email", and a significant part of the issues are related to the "encrypt email" use case: part of the metadata leakage, no forward secrecy, ...

The closest advice to this in the article would be "use Signal" which has various issues of its own, unrelated to crypto: it has Signal Foundation as a SPOF and its ID mechanism is outright wonky, as phone numbers are IDs that are location bound, hard to manage multiple for a person, hard to manage multiple persons per ID, hard to roll over.

To me that seems to be a much bigger issue than "encrypting files for purposes that aren't {all regular purposes}".

Is it wrong to use openssl to encrypt files?

0. (Only once) generate key pair id_rsa.pub.pem, id_rsa.pem

1. Generate random key

  openssl rand -base64 32 > key.bin
2. Encrypt key

  openssl rsautl -encrypt -inkey id_rsa.pub.pem -pubin -in key.bin -out key.bin.enc
3. Encrypt file using key

  openssl enc -aes-256-cbc -salt -in SECRET_FILE -out SECRET_FILE.enc -pass file:./key.bin
-- other side --

4. Decrypt key

  openssl rsautl -decrypt -inkey id_rsa.pem -in key.bin.enc -out key.bin 
5. Decrypt file

  openssl enc -d -aes-256-cbc -in SECRET_FILE.enc -out SECRET_FILE -pass file:./key.bin

from https://www.openssl.org/docs/man1.1.1/man1/openssl-enc.html

> The enc program does not support authenticated encryption modes like CCM and GCM, and will not support such modes in the future.

> For bulk encryption of data, whether using authenticated encryption modes or other modes, cms(1) is recommended, as it provides a standard data format and performs the needed key/iv/nonce management.

So don't use `openssl enc` to encrypt data.

`openssl cms` that is recommended above is S/MIME. Don't use S/MIME.

I can't wait for Filippo Valsorda's `age` to be done so I would have an answer to the question of "what should I use to encrypt a file?".

`enc` doesn't support and will never support _any_ authenticated ciphers. Consider it a red flag when you see it in the future.


To start with, none of that encryption is authenticated.

So if I understand you correctly (Noob here), Alice would need to sign the pair (key.enc, file.enc) to authenticate that those files originated from her.

Without that, Bob could potentially receive any pair of (key,file), which would just decrypt into garbage data.

BTW, variations on that sequence appear all over the internet when searching for "openssl encrypt file with public key"...

This is one of the problems with cryptography: with a little knowledge, you can end up making yourself completely insecure while believing yourself to be very secure.

People generally imagine that "encrypt this block of data" is a simple primitive that does everything you want it to. But naive encryption doesn't work like that. In the worst case, where you use ECB for the block cipher [1], you end up with the ECB penguin: https://blog.filippo.io/the-ecb-penguin/. Your secure crypto becomes a pretty trivial Caesar cipher, just on a larger alphabet. Other modes (such as the CBC mode you used) aren't so bad, but if you have some hint of the structure of the underlying data, you can start perturbing the ciphertext to manipulate the underlying plaintext.

The modern solution to that problem is "authenticated encryption," which means that you add in an additional guarantee that the ciphertext hasn't been tampered with. Even then, there is still room for doing things incorrectly (padding is a real pain!).

[1] This is so bad it shouldn't ever be an option in any tool.
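The authenticated-encryption structure described above can be sketched as encrypt-then-MAC. This toy uses a SHA-256 counter-mode keystream purely so the sketch is self-contained; real code should use a vetted AEAD such as AES-GCM or ChaCha20-Poly1305 (e.g. via libsodium), and the function names here are made up:

```python
import hashlib, hmac

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream -- illustration only, not a real cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key, mac_key, nonce, plaintext):
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # MAC the ciphertext (encrypt-then-MAC), bound to the nonce
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def unseal(enc_key, mac_key, nonce, ct, tag):
    # Verify BEFORE decrypting: tampered ciphertext is rejected outright.
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

The point is the shape: any ciphertext an attacker has touched fails the tag check and never reaches the decryption (or padding) code.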

> [1] This is so bad it shouldn't ever be an option in any tool.

And yet it's effectively the default.

No, the problem is that the AES encryption step uses bare AES-CBC. An attacker can flip bits in the ciphertext to make targeted changes to the plaintext. What you want is an authenticated encryption mode, but I don't know that the OpenSSL CLI supports any of them.
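That bit-flipping attack is easy to demonstrate. With unauthenticated stream-style encryption, XORing the ciphertext XORs the plaintext at the same positions (CBC is similarly malleable, just offset by one block, with the preceding block garbled). A toy sketch, where a SHA-256 keystream stands in for any unauthenticated cipher:

```python
import hashlib

def toy_stream(key: bytes, data: bytes) -> bytes:
    # XOR with a counter-mode keystream; the same call encrypts and decrypts.
    # Toy construction for illustration only.
    ks, ctr = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ k for d, k in zip(data, ks))

key = b"secret key the attacker never learns"
msg = b"PAY $0100 TO MALLORY"
ct = toy_stream(key, msg)

# The attacker knows only the message layout, not the key: XOR the
# ciphertext at the amount field to turn $0100 into $9900.
pos, old, new = 5, b"0100", b"9900"
delta = bytes(a ^ b for a, b in zip(old, new))
forged = (ct[:pos]
          + bytes(c ^ d for c, d in zip(ct[pos:pos + 4], delta))
          + ct[pos + 4:])
# toy_stream(key, forged) now decrypts to b"PAY $9900 TO MALLORY"
```

An authenticated mode defeats this because the modified ciphertext no longer matches its tag.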

I've followed and enjoyed your commentary on PGP and cryptography in general, so I thought I'd post it.

Any idea when Filippo's `age` will be done, or how to follow its development, other than the Google doc?

Filippo should get to work! The design part of age is the hard part; the actual programming is, I think, maybe one of the easier problems in cryptography (encrypting a single file with modern primitives).

I am a little bit giving Filippo shit here but one concern I have about talking "age" up is that I'm at the same time talking the problem of encrypting a file up more than it needs to be, so that people have the impression we'd have to wait, like, 5 years to finally see something do what operating systems should have been doing themselves all this time.

What are your thoughts on Keybase as a secure Slack replacement?

Personally, I love Keybase and it is my #1 choice for communicating with a few people, but bugs are far too frequent for me to consider it for a business.

It's getting better, but not close to being business-ready imo.

I've been using Wire on iOS, web, desktop (Electron), and Android, and Keybase on Android and desktop (CLI). Neither is great, but Keybase is definitely the buggier of the two. Wire on Android, on the other hand, is also quite unusable due to its battery drain. And both are pretty much unusable on desktop: Keybase is a CLI (not even a TUI) and Wire is Electron. I'd prefer a TUI over Electron, but it's not even a TUI, so I guess Wire wins this round. Keybase also doesn't have a web client, which is why I have experience with the command-line client. I think that says enough in and of itself.

What I'm trying to say is, definitely also try Wire, as it's similar but slightly better. I also haven't figured out how to verify someone over Keybase, so it's basically unauthenticated or opportunistic encryption. By comparison, Wire is considered secure enough by my company after we did a pentest on it, which says quite something (most customers' software we would run from if we were thinking of using it), and we use it as our main communication platform within the company.

> Keybase is a CLI

Keybase has a relatively functional electron app on Mac and PC that I've used, and presumably also Linux.

> I also haven't figured out how to verify someone over Keybase

Isn't the goal with Keybase to see "this person is accountiknow on Github" and do verification that way?

> Keybase has a relatively functional electron app

Oh, then I am mistaken, thank you for correcting me and sorry for talking nonsense.

> Isn't the goal with Keybase to see "this person is accountiknow on Github" and do verification that way?

Yes, but that doesn't help me because my chat account is not cryptographically tied to my Keybase account.

So let's say I sign this statement that I am lucb1e on Github as well as on HN as well as on Keybase. You know and can verify that I have all these identities nicely tied to my PGP key. The person who controls those identities clearly also controls the PGP key. Now comes chat. You start a chat with me, and... magic? I don't know, but there is no verification code that I can use to check that I am really talking to lucb1e on Keybase; nothing that ties it to a PGP key; nothing that ties it to the HN user. My device might be encrypting the data for an entirely different key (whose private part is known either by Keybase or some other (W)MitM ((Wo)Man in the Middle)) and there is no way to check as far as I have been able to find.

Keybase isn't centered around a PGP key, each account is like a web of devices and cross-verifications of different proofs. All of this is stored and verified locally. When you open a chat with "rakoo on keybase" you can be sure that all messages are encrypted to me because the keybase client will have used the equivalent of my public key. The same process allows us to exchange files, or share a git repository hosted on keybase. There is no specific "chat" account, it's just part of the account.

> you can be sure that all messages are encrypted to me because the keybase client will have used the equivalent of my public key

But it's not end to end encryption if you can't verify it. Heck, even Whatsapp can do this. Let me cite Wikipedia on end to end encryption:

> Because no third parties can decipher the data being communicated or stored, for example, companies that use end-to-end encryption are unable to hand over texts of their customers' messages to the authorities.

So if Keybase got a court order to intercept your messages, they totally can. Just because Keybase sends you an encryption key, doesn't mean you got the one of your conversational partner: you'd have to do funky stuff (read and interpret the app's memory, or decompile and patch it to show fingerprints) to be able to verify that. That's unauthenticated encryption. It's like clicking past an https warning without ever seeing the warning.

That's not how keybase works. Keybase doesn't send you encryption keys you blindly trust. It only sends you a bunch of data structures allowing you, the sender, to verify that your correspondent really is who they claim they are. All verification of the proofs is done on the client. That means that your client is going to fetch all the sites allegedly linked to that account and do a verification locally. Because of this, if keybase ever receives a court order, there must also be a court order for Twitter, GitHub, Mastodon, all the websites that are linked to this account.

This verification happens every time you receive new information about your peer; if something has changed it must be new, otherwise a rollback or a change in history is detected. Once your client has made this automatic check you can get a list of the linked accounts and manually check if they are the correct ones; at this point your client can take a snapshot to prove even more strongly that it's the correct one. Those snapshots are shared, so if an account has multiple followers (the name for someone who has manually verified an account) it's that much harder to crack.

If that's not enough, any change to any account is stored in a Merkle tree whose root is thus monotonically incrementing, and that tree can be retrieved at any time to verify nothing has been tampered with. And that root is stored in the Bitcoin blockchain so that any fork is easily detectable. You really have to go out of your way to distribute a compromised key to a client. In the meantime the encryption is end-to-end.
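The tamper-detection idea behind the Merkle tree can be sketched in a few lines. This is my own minimal illustration, not Keybase's actual data format: leaves are hashed pairwise up to a single root, so changing any leaf changes the root, which is what makes a rewritten history detectable.

```python
# Minimal Merkle-root sketch (illustrative only, not Keybase's real format).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

accounts_v1 = [b"alice:github", b"alice:twitter"]
accounts_v2 = [b"alice:github", b"mallory:twitter"]  # one tampered entry

# Any change to any leaf produces a different root.
assert merkle_root(accounts_v1) != merkle_root(accounts_v2)
```

A client that remembers the last root it saw only needs to compare 32 bytes to notice that the server served it an altered or rolled-back history.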

Here's an article about how they protect against malicious attacks against the server: https://keybase.io/docs/server_security

Here's how they store that Merkle root in the Bitcoin blockchain: https://keybase.io/docs/server_security/merkle_root_in_bitco...

Oh gotcha - the lack of visibility into the verification codes could be an issue, but IMO it's like signal or any other encrypted chat app - the complexity is hidden from the user, but the identities are still verified. All of the client code is open source and so you could dive in to see how verification is handled, but why should you need to? And what code could the app show you that a malicious app running a MITM attack couldn't?

> what code could the app show you that a malicious app running a MITM attack couldn't?

I trust the app itself, since you can indeed verify that code, but the idea is that you don't have to trust the (Keybase-managed) server.

So what you'd verify is the encryption key. If we do a Diffie-Hellman key exchange and our shared key is abcxyz, then both phones should show that key. If an attack is going on, the key would have to be one known to the attacker rather than your conversational partner.

Simplified, DH is quite easy: you pick a base number and a modulus (public knowledge), both parties generate a random number (let's call it Rand), compute base^Rand, apply the modulus, and send the result to the other person; each side then raises the number it received to its own Rand (again applying the modulus). The resulting number is known only to the two parties, even though anyone could have observed the public parameters and the numbers that were sent to the other side. If a person wants to intercept this, they need to pick a Rand themselves and do the operations with that, replacing the number that gets sent over with their own. Because they can't know the Rand of either legitimate party, they will necessarily have a different result, and so the resulting encryption key is different. Both parties would (upon verifying their key out of band, for example by holding their phones next to each other in real life) see different encryption keys.
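The exchange described above can be written out directly. A toy sketch (tiny numbers chosen for readability; a real implementation uses large safe primes or elliptic curves and constant-time code):

```python
# Toy Diffie-Hellman exchange (illustrative only: insecurely small numbers,
# and no authentication -- which is exactly why out-of-band key verification
# matters, as the comment above explains).
import secrets

p = 23  # public modulus
g = 5   # public base

a = secrets.randbelow(p - 2) + 1  # Alice's secret "Rand"
b = secrets.randbelow(p - 2) + 1  # Bob's secret "Rand"

A = pow(g, a, p)  # Alice sends this over the wire
B = pow(g, b, p)  # Bob sends this over the wire

# Each side raises what it received to its own secret.
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)

assert alice_shared == bob_shared  # same key on both ends
```

An attacker in the middle would have to substitute their own values for A and B, which results in two different shared keys, and that mismatch is what an out-of-band fingerprint comparison detects.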

It's not about verifying the source code, but verifying that the server which I'm talking to is not malicious (for example if someone compromised it). That's the one property which makes end to end encryption "end to end" :)

Similar schemes can be made with different types of encryption (Diffie-Hellman is one of a few good methods), but with end to end encryption, the end devices are always the ones that verify each other.

This is great, thanks for writing it!

Brings to mind the words of renowned Victorian lifehacker Jerome K. Jerome:

“I can't sit still and see another man slaving and working. I want to get up and superintend, and walk round with my hands in my pockets, and tell him what to do. It is my energetic nature. I can't help it.”

The problem with the alternatives is they are product specific and baked into that product. I need a tool that is a separate layer that I can pipe into whatever product I want, be it files, email, chat, etc. Managing one set of identities is hard enough thank you very much and I also want to be able to switch the communication medium as needed.

I use gnupg a lot and I'm certainly not very happy with it, but I guess it's the same as with democracy: the worst system except for all the others.

The problem with this is that a tool that is too generic is itself dangerous, because it creates cross protocol attacks and confusion attacks like in https://efail.de for PGP email.

I think that a better approach is to bind identities from multiple purpose built cryptographic protocols.

If you blame gpg for efail you can blame anything really for a virus on one of the endpoints of any form of encryption.

Next you will hold tech responsible for social engineering and mandate that users should not know their own secrets because that causes vulnerabilities in protocols :p

By that logic we shouldn't recommend cars with high safety ratings, because we can always train users to drive motorized unicycles at 200 MPH. Clearly there's no fault with the unicycle regardless of how many people crash, it behaved as specified.

Except the specification is trash.

Except cars with 5 star rating exist but a better solution than PGP doesn't.

FYI, Robert J. Hansen's comment in gnupg-users@gnupg.org.[0]

0) https://lists.gnupg.org/pipermail/gnupg-users/2019-July/0623...

I don't understand how these represent "mistakes", let alone "serious mistakes". But I'm glad he liked it.

It's over my head, but I thought a reference would be useful, given that the mailing list thread references HN.

Oh, no, sorry, I'm glad you posted this; I wouldn't have known where to look. Thank you!

So the recommendation here is just to use Chat clients to communicate and forget about Email? Well that is hardly a good solution.

Yes. Email is designed to be stored forever, and all attempts to do otherwise fail due to the design of email clients. Chat can be designed to self-destruct. Anyone can screenshot either, so there's no use worrying about that.

Yes! E-mail is fundamentally terribly positioned to do secure messaging. You can use e-mail, or you can have cryptography that works and have people use it, but you can't do both.

> E-mail is fundamentally terribly positioned to do secure messaging

E-mail is fundamentally a way to send a sequence of bytes somewhere (untrusted) so they can be picked up later by someone (trusted).

That’s also literally what Signal is built on so I think you’re overstating the difference.

Secure messaging is much more complex, but here’s a simple example of how that’s not true: TCP is bidirectional and email is one message, fire and forget. That immediately affects your ability to have forward and backward secrecy.

I do not think the OSI model is very useful but you seem to, so let me put it this way: E-mail is bidirectional too, just at layer 7 instead of layer 4.

E-mail is store-and-forward just like TCP is; how do you think an IP router works? TCP is full duplex; a tx doesn't wait behind an rx, exactly like an e-mail reply not waiting behind an e-mail receive. The only difference is that a router will typically use volatile memory to store messages before they are sent, but e-mail will typically use disk.

If your security model relies on this difference then your security model is broken. It’s worth noting that Signal does NOT rely on this difference. It relies on participants being mostly online to permit frequent rekeys and not having to retain old keys indefinitely.

No, email is not bidirectional. You send an email, the recipient later opens it. Sure, the recipient's SMTP server might respond right away with an ephemeral key you can use to enjoy forward secrecy, but that server has to store the message for the recipient to retrieve later.

You can't have full forward secrecy with email as it is used today. If you want forward secrecy with email, you need three emails sent in rapid succession: Alice sends a request to Bob, Bob sends a response to accept the request, and Alice sends the actual encrypted email. That would work. But you basically need Bob to be online.
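The three-email handshake sketched above can be made concrete. This is my own illustration of the idea (toy DH parameters again, and the framing of the three emails is hypothetical): each party deletes its ephemeral secret as soon as it's no longer needed, which is what gives forward secrecy.

```python
# Sketch of the three-message flow: request, accept, encrypted message.
# Toy parameters; a real design would use X25519-style ephemeral keys.
import secrets

p, g = 23, 5  # public DH parameters

# Email 1: Alice -> Bob, a "request" carrying Alice's ephemeral public value.
a = secrets.randbelow(p - 2) + 1
email1 = pow(g, a, p)

# Email 2: Bob -> Alice, an "accept" carrying Bob's ephemeral public value.
# Bob can derive the key immediately and delete his secret.
b = secrets.randbelow(p - 2) + 1
email2 = pow(g, b, p)
bob_key = pow(email1, b, p)
del b  # Bob's ephemeral secret is gone; a later compromise reveals nothing

# Email 3: Alice -> Bob, the actual message encrypted under the shared key.
# Alice derives the key from Bob's reply, then deletes her secret too.
alice_key = pow(email2, a, p)
del a

assert alice_key == bob_key
```

The catch the comment identifies is visible here: Alice must hold `a` across the whole round trip, so if Bob is offline for days, that "ephemeral" key sits on disk the entire time.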

> you need three emails sent in rapid succession

This is partially correct, but they do not need to be in rapid succession, and therefore Bob does not need to be online.

Alice's ephemeral private key must be kept as long as the whole handshake. Bob's is a bit shorter (between the last two messages).

If the messages are slow to come, those ephemeral keys become less and less ephemeral, and could actually be stolen.

That is exactly correct. As I'm sure you know, it is Alice that retains her DH key not the email server or anyone else. As I said:

> If your security model relies on this difference then your security model is broken. It’s worth noting that Signal does NOT rely on this difference. It relies on participants being mostly online to permit frequent rekeys and not having to retain old keys indefinitely.

Signal does not depend on TCP being "bidirectional" as lvh said, it depends on participants being mostly online. This has nothing to do with the transport properties of e-mail vs. TCP.

“Bidirectional”. So peers can mostly talk to each other. Do you really want to die on that particular semantic hill? “These Ethernet frames have source and destination addresses eventually”?

> Do you really want to die on that particular semantic hill?

Sure. The world of cryptography software is already muddled by misinformation, poor practices and misguided appeals to authority. We shouldn't need to spread misinformation about technologies such as e-mail to get people to stop using it.

> Long term keys are almost never what you want. If you keep using a key, it eventually gets exposed. You want the blast radius of a compromise to be as small as possible, and, just as importantly, you don’t want users to hesitate even for a moment at the thought of rolling a new key if there’s any concern at all about the safety of their current key.

Interestingly some protocols such as roughtime use the same tactic as OpenPGP: one long-term identity key that can be kept offline and rotation of online (short-term) keys signed by the long-term key. Details here: https://roughtime.googlesource.com/roughtime/+/HEAD/PROTOCOL...

If you squint enough, that’s how CAs work too.
