
I'd like to gingerly suggest that this is not the way a project that has deliberately set as its adversaries hostile world governments should respond to a trivial, predictable† vandalism attack.

Rather, if they're serious about what they're doing – Hansen, in a related document, talks about the "good advice" he gave to dissidents in Venezuela about using GnuPG – they should thank whoever did this. This attack apparently only hit the accounts of two maintainers – that is, two people who are, or should be, exquisitely capable of distilling signal from the attack, and making sensible decisions to mitigate it going forward.

A serious attacker, on the scale of the adversaries this project has, again, deliberately selected for itself, wouldn't waste the vulnerability this way. They'd wait for the most opportune time and apply the attack broadly to accomplish their own state-level goals.

This isn't the first time the GnuPG ecosystem has responded this way to attacks. They similarly (and dishonestly) attacked the Efail researchers, and in the same document I referred to above, Hansen attacked EFF and Micah Lee for publishing exploit code; "Academic freedom should not be construed as permission to publish attack tools against a critical service with known vulnerabilities". This is what you'd expect from one of the vendors posting about a tempfile race condition on the CORE clique list in 1992; it's preposterously out of step with how the field handles vulnerability research today.

If you're relying on GnuPG for anything serious, you should be alarmed at the way they react to security setbacks.

† Indeed, repeatedly predicted.

Can confirm, I've reported a similar attack [1], along with a few other vulnerabilities, and also published exploit tools. I ended up getting legal threats from two people whom I frequently see posting to the sks-devel@ mailing list.

Additionally, Robert (GnuPG maintainer who wrote this Gist) has attacked [2] another person who wrote a proof-of-concept filesystem on top of SKS that was intended to highlight how broken the design is.

I have not seen a single open source community that would treat full disclosure with such contempt.

At this point the SKS network continues to run exclusively on community goodwill. This attack seems to be specifically targeted at GnuPG maintainers; if attacker were to deliberately try to break SKS, they would target someone like Linus Torvalds.

Alternatively, there are other published vulnerabilities, with exploits, that allow taking the whole SKS network down within half an hour; those were published more than a year ago. And yet, they have not been used, so far.

[1]: https://bitbucket.org/skskeyserver/sks-keyserver/issues/57

[2]: https://twitter.com/robertjhansen/status/1017863443356020738

> if attacker were to deliberately try to break SKS, they would target someone like Linus Torvalds.

I wish they did, as I am hoping for an outcome similar to bitkeeper/git.

> I have not seen a single open source community that would treat full disclosure with such contempt.

So what are the acceptable limits of this "full disclosure"?

I should have said "any disclosure": EFail was coordinated (6 months' notice [1]) and yet GnuPG officially downplayed the risk [2], launched an #effail counter-campaign, and blamed the researchers for bad disclosure [3].

With regards to any of the existing SKS exploits specifically: even if any of them were to undergo coordinated disclosure, it wouldn't have helped: trollwot has been available for 5 years, both keyserver-fs and sks-exploit -- for more than a year. Embargoes don't last that long. All three tools still work.

What the GnuPG Project is effectively trying to do is stop people from writing about any security problems, period, especially those that are hard to fix.

[1]: https://gist.github.com/tqbf/2ef6bce7d16e9d3e76d790fd99c9618...

[2]: https://twitter.com/gnupg/status/995936684213723136

[3]: https://twitter.com/gnupg/status/996856990818283521

OK, makes sense. And damn, 10 years is >>> a year.

So then, as a mere user, I gotta ask how so much of the Linux ecosystem -- and indeed, so much of the open-source ecosystem -- came to depend on such a fragile thing as the SKS keyserver network. That's kinda mind-blowing.

> This isn't the first time the GnuPG ecosystem has responded this way to attacks.

Hmmmm, I think this is a bit of a squeaky-wheel situation. Remember that the sks keyserver pool is mostly a decentralized group of volunteers running a server as a hobby. So you can have all types of people operating keyservers in the pool.

For instance, I've been running a keyserver in the pool for several years. However, I don't blame the attackers like you describe. In fact, I'm openly asking around for a mentor to build a keyserver implementation that can better deal with these kinds of flooding situations.

Anyway, even though I can totally understand why operators get mad and lash out at people trying to take down the service they run as a hobby to help activists communicate securely, I want to stress that that reaction isn't representative of many of us in the pool.

The fact that it is simultaneously a "hobby" and an "attempt to help activists communicate securely" is emblematic of the whole problem here.

Either way, the time for Hansen to have warned people about the keyservers was when he first became aware of the vulnerability ("well over a decade" ago), not right after it got exploited on him personally. Everything about this response, from the personal offense he's taken to the lashing out he's done against vulnerability research to the apparent decade-long delay in notification, is unserious and unworthy of a project that purports to protect dissidents against governments.

> The fact that it is simultaneously a "hobby" and an "attempt to help activists communicate securely" is emblematic of the whole problem here.

Isn't that the way it usually gets done for most non-profit altruistic efforts, though? If I'm a church and run a soup kitchen for the homeless, the volunteers who come in and prepare meals and serve patrons are probably not going to be trained professional chefs. They are going to be people who just want to help and are volunteering as a hobby to try to do some good.

I'm sure soup kitchens deal with this kind of situation all the time, where you have a volunteer complain about this or that, and then an outsider says the soup kitchen is a shit show. That doesn't mean soup kitchens shouldn't exist. It's just the drama you have to deal with when running a soup kitchen.

Soup kitchens rarely position themselves as being secure against CIA poisoning attacks.

Can you please explain a bit more about these CIA poisoning attacks? As far as I know, the vulnerability here is just flooding keys with spam signatures so much that the public keys crash sks keyservers and gpg when downloaded. That seems like just a basic DoS attack. Where is the CIA poisoning?

Right upthread, from the very same author of the gist:


This is the difference between a soup kitchen and a neurosurgery clinic.

I believe an apt analogy might be "the lack of a neurosurgery clinic is not a reason to avoid building a health clinic."

Health clinics rarely trumpet themselves as solutions to brain injuries they clearly aren’t capable of working on.

And yet, if I am suffering from a brain injury and no one in the last 30 years has seen fit to build anything other than a health clinic in my town, I'm probably pretty happy there's a nurse practitioner available.

Let's not make the perfect the enemy of the good.

And with all respect to the professionals in the field, casting operational stones at a technically valid solution seems... myopic.

The professionals are trying very hard to tell you it's not a technically valid solution. The math behind public-key encryption is not the issue; operationalizing it is. OpenPGP is a disaster there.

Note I’m not a professional in this field but I occasionally drink with them.

In what way is this not a solvable problem?

From the article, the only issues seem to be (1) poor SDLC practices leading to toxic, frozen code, and (2) the difficulty of performing protocol/standard upgrades on a decentralized network.

I'd hate to see GPG/PGP used among journalists and their sources. Suggesting this as a good way to securely communicate is negligent at best.

gpg is good, but the infrastructure (keyservers) and tooling (S/MIME, Enigmail, etc.) around GPG are a nightmare. Bootstrapping trust and managing the lifecycle of trust is an unsolved problem, and PGP/GPG makes some of the worst assumptions about users (imvho: as long as users are expected to understand threat models and manage these things themselves, it's really hard).

I'm certainly no crypto expert. And, sad to admit, I hadn't even heard of trollwot until today. Or keyserver-fs or sks-exploit. I have read about risks of key collision, but had the impression that faked keys wouldn't actually work.

So yes, I get the argument that Hansen should have warned people. But I gotta wonder who else has been aware of this vulnerability for years.

And I wonder how bad this could get. I can purge requests to SKS keyservers from my machines, but what about all the upstream impacts? As I understand it, GnuPG authentication is pervasive. And "ask SKS" may be almost as pervasive.

I use GPG quite a bit. I sign my git commits with it, occasionally use it to securely transfer files with people, and appreciate to have everything coupled with my Yubikey.
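For reference, the commit-signing part of that setup is just a couple of git config knobs; a minimal sketch (the key ID below is a placeholder):

```shell
# Tell git which GPG key to sign with (placeholder key ID)
git config --global user.signingkey 0xDEADBEEFDEADBEEF

# Sign an individual commit...
git commit -S -m "my signed commit"

# ...or sign every commit by default
git config --global commit.gpgsign true

# Show signature status when reviewing history
git log --show-signature
```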

What are reasonable alternatives to this right now? If I’m not using the keyservers, it’s not that bad, right?

This is my reaction as well.

This seems bad, but... what should I do? What's the alternative?

I've seen multiple people say that PGP in general is kind of bad and that it would be easy for the tech industry to write a secure alternative if it really wanted to. Cool, but that's not useful right now to ordinary people like me, who aren't crypto experts but are trying to decide how we should sign/encrypt messages.

I have no idea what I would use as an alternative to PGP.

To send messages, use a secure messenger, like Signal or Wire. Don't use PGP.


I have seen various endorsements of Signal from you, Bruce Schneier, Edward Snowden and so on.

I am honestly curious about how this aligns with the fact that Signal

* has no tests [1],

* has no CI [2].

How can the security of software like Signal be asserted so thoroughly when, on the engineering side, basic best practices are not followed and there is no automation ensuring that the important code paths work as expected?

Many Signal features, like voice calls, video calls, reliable message delivery, or simply running without crashing, break regularly in daily use and with new updates. They have bugs.

What gives us (or you) confidence that the safety-critical aspects of Signal are magically exempt from such frequent bugs?

This is a serious question that concerns me.


(8-year Signal user with upstreamed patches.)

[1] There is a "test" directory, but it is negligible: 900 lines of actual test code in Signal-Android, vs >100k lines Java app source code.

[2] At least I could not find any; tests on `master` did not even compile; see https://github.com/signalapp/Signal-Android/issues/7458#issu...

How do you use Signal without a phone number?

I mean:

> Requirements

> Signal uses your existing phone number.

> The number must be able to receive an SMS or phone call.


You can't use one of those shared SMS services. So what, lease a SIM from some SIM farm in wherever, and hope that they're honest?

No privacy-conscious system would require phone numbers.

Okay, and to sign commits or emails?

To encrypt files?

To sign or encrypt emails you could use S/MIME. It is much more widely supported than PGP for signing and encrypting emails as well.

S/MIME is even worse than PGP. Don't use it.

Don't encrypt and sign emails.

This is not a realistic solution.

SMS-sized messages are the least hard part of this for me. What I want is to be able to point at a file or folder on my computer and say, "sign that with my private key so I can prove I wrote it" or "encrypt that with someone else's public key so only they can read it". At that point, I don't necessarily care all that much about how the file gets sent over the network.

It doesn't need to be integrated into email, but it does need to be a low-level enough operation that I can use it on an arbitrary block of text, file, or folder of any size. Is there a replacement that does that?

I use Signal, and it's great. But Signal is not a replacement for PGP, it's a replacement for one, very specific use-case for PGP.

You asked (among other things) how to send messages securely without PGP. Don't use PGP to send secure messages; use a secure messenger, like Signal or Wire.

Someone else asked how to email securely without PGP. Email isn't secure with PGP. Don't use PGP to encrypt emails, and don't use email to send secure messages; use a secure messenger, like Signal or Wire.

I acknowledge there are use cases not well covered by secure messengers. The current state of file encryption, which is practically the "hello world" of encryption problems, is a travesty. If you're simply looking to sign something, and later verify that it was you who signed it, use minisign. But that's a very narrow use case.
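For anyone unfamiliar with it, the basic minisign workflow is very small; a sketch assuming the stock CLI (file name is a placeholder):

```shell
# Generate a keypair: secret key under ~/.minisign/,
# public key written to ./minisign.pub
minisign -G

# Sign a file; writes release.tar.gz.minisig next to it
minisign -Sm release.tar.gz

# Verify the file against the small public-key file
minisign -Vm release.tar.gz -p minisign.pub
```

Verification needs only the public-key file (or the one-line public key string itself), which is what makes it attractive for the narrow "did I sign this?" use case.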

To be clear, I'm not saying that anyone who criticizes PGP needs an easy solution. I agree with all the stuff I'm seeing in this thread, and I get that the current answer might be, "well, the ecosystem is kind of bad right now." But what I'm getting at is that it's one thing to understand that the ecosystem is bad, but that on its own is not information I (or most people) are equipped to act on.

Minisign could solve some of that, but going back to the point that I don't trust myself to audit cryptography software: Minisign also appears to be a one-person project, and I can't find very many people online talking about it, using it, or looking for vulnerabilities. It's not that I don't trust you, I see you on HN a lot, but I'd feel more comfortable with Minisign if I could find more security people recommending it.

I can drop PGP for anything where I find a different tool that supports that specific use-case that's trustworthy. I'm not thrilled about that, because part of my security process is trying to make it hard for me to make mistakes as a user, and multiple tools hurt that effort. But I can deal.

BUT, I can't just stop encrypting files. I can start using a lot of tiny, individual tools for some of my use-cases, but occasionally, I'm going to be in a situation where I need to do the "hello world" stuff.

To kind of rephrase what I'm asking, regardless of whether or not PGP is good, is it currently the best solution for handling public/private key encryption in the general use case (particularly if I'm not personally using SKS for anything)? Because I can't just decide not to encrypt files any more; even if the current solution is bad I still need to use something. The Minisign main dev is also recommending Encpipe, which could solve some of my use cases, but doesn't support public keys and, again, looks like it's a hobby project that practically nobody in the security world is talking about or auditing. I guess age[0] also looks promising?

In theory, Age and Minisign could meet the majority of my hard requirements by themselves if I could verify that they're trustworthy. But realizing that PGP has been run essentially as a hobby project, it feels a little weird to move to another piece of software with only one serious maintainer.

[0]: https://docs.google.com/document/d/11yHom20CrsuX8KQJXBBw04s8...
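For what it's worth, age's command-line workflow is about as minimal as it gets; a sketch (the recipient string and file names below are placeholders):

```shell
# Generate an identity file holding the private key;
# the matching public key ("age1...") is printed alongside it
age-keygen -o key.txt

# Encrypt a file to someone's public key (placeholder recipient)
age -r age1examplerecipientstring -o secret.txt.age secret.txt

# Decrypt with the identity file
age -d -i key.txt -o secret.txt secret.txt.age
```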

minisign is a few hundred lines of code most of which are setup for calls into libsodium.

If you need to encrypt files (symmetrically) then use a tool that does this well like Veracrypt.

> The current state of file encryption, which is practically the "hello world" of encryption problems, is a travesty.

Do you know of any projects that are aiming to solve this? It feels like all that is needed is a halfway decent standard file format, and some tools to bootstrap it. That seems to be how we got TLS and SSH, which are the two successes of encryption.

I suppose those two protocols have the advantage of interactive negotiation. Whereas software encrypting a file does not get to negotiate any parameters with the software that will later be decrypting it.

Moreover, those protocols had some decent weight behind them. Secure data transport is a problem that matters to essentially everyone these days, whereas secure and portable file encryption really does not. E-mail is a significant use case, but only tangentially, and optimal e-mail solutions are not really optimal portable-file solutions.

I am not disagreeing with any particular technical point here, but I am struck, reading the thread, that there are two sides to this debate: one that says "Gosh, GPG has had a fundamental problem for a decade, it's terrible that we as the internet have not solved it, and now the bill comes due", and the other that says "Where the hell did that come from?"

I am on the WTF side - GPG is synonymous in my mind with Public / Private keypairs and now this needs re-evaluating.

Tl;dr: I need enough context to understand the blast radius before I can evaluate the solutions.

Some of us have a lot of running to do to catch up

[matrix] and Riot.im are alternatives to Signal and Wire.

Unfortunately these have strong privacy implications as they all phone home an awful lot.

Do you have more info about this? As far as I understand if you use e2e encryption with riot/matrix you should be quite safe.

I imagine the GP is talking about https://news.ycombinator.com/item?id=20178267. We've spent the last few weeks going through fixing the issues which this highlighted; there'll be a blog post later today (or tomorrow) giving an update on how we've addressed the points in question.

Yeah I wrote this before I saw that blog post which appears to have hit most of the implications.

Sorry, no. Riot and Matrix are both NOT e2e. I wish people would stop repeating as fact what is somewhere on the roadmap.

Without more context, saying that matrix is not e2e encrypted is just as much of a lie as saying it is.

Riot and Matrix have had E2EE since 2016. You have to manually enable it when you want it, but that's shortly going to change as per https://news.ycombinator.com/item?id=20315096.

I haven't used it for this, but I think Keybase might be a good system for filling the role of keyservers. If you just want encrypted messaging, it has it. But you can also store your PGP key in there and people can trust it if they trust Keybase's other proofs (which are not necessarily a leap of faith, you can go to the source and verify them).

Proprietary messengers are never secure.

Both Signal and Wire are FOSS though.

It's just a means of public-key visibility, and a nice service with automated importing and search. I have known some people to give out their keys on their own site or service page, though. Remember, you can upload just about any info to key servers, including similar emails and names; the key string is the "key", though. This poisoning seems to target the keyserver itself, so you may not get the key you are looking for.

A vulnerability in mission-critical software has been known for years, and they're mad that finally someone got fed up enough to publicly draw attention to it in a way that couldn't be ignored or dismissed by the maintainers?

I don't think it's the EFF putting activists at risk here.

I think everyone on this sub thread is on the same page about this.

My impression from the article was less that they'd personally made the decision hostile world governments were their adversary, and more that they'd ended up looking after some poorly-understood software none of them really knew how to change that made that decision 20 years ago.

See the document linked alongside your comment.
