Rather, if they're serious about what they're doing – Hansen, in a related document, talks about the "good advice" he gave to dissidents in Venezuela about using GnuPG – they should thank whoever did this. This attack apparently only hit the accounts of two maintainers – that is, two people who are, or should be, exquisitely capable of distilling signal from the attack, and making sensible decisions to mitigate it going forward.
A serious attacker, on the scale of the adversaries this project has, again, deliberately selected for itself, wouldn't waste the vulnerability this way. They'd wait for the most opportune time and apply the attack broadly to accomplish their own state-level goals.
This isn't the first time the GnuPG ecosystem has responded this way to attacks. They similarly (and dishonestly) attacked the Efail researchers, and in the same document I referred to above, Hansen attacked EFF and Micah Lee for publishing exploit code; "Academic freedom should not be construed as permission to publish attack tools against a critical service with known vulnerabilities". This is what you'd expect from one of the vendors posting about a tempfile race condition on the CORE clique list in 1992; it's preposterously out of step with how the field handles vulnerability research today.
If you're relying on GnuPG for anything serious, you should be alarmed at the way they react to security setbacks.
† Indeed, repeatedly predicted.
Additionally, Robert (the GnuPG maintainer who wrote this Gist) has attacked another person who wrote a proof-of-concept filesystem on top of SKS that was intended to highlight how broken the design is.
I have not seen a single open source community that would treat full disclosure with such contempt.
At this point the SKS network continues to run exclusively on community goodwill. This attack seems to be specifically targeted at GnuPG maintainers; if an attacker were deliberately trying to break SKS, they would target someone like Linus Torvalds.
Alternatively, there are other published vulnerabilities, with exploits, that allow the whole SKS network to be taken down within half an hour. They were published more than a year ago, and yet they have not been used so far.
I wish they had been, as I am hoping for an outcome similar to BitKeeper/git.
So what are the acceptable limits of this "full disclosure"?
With regards to any of the existing SKS exploits specifically: even if any of them were to undergo coordinated disclosure, it wouldn't have helped: trollwot has been available for 5 years, both keyserver-fs and sks-exploit -- for more than a year. Embargoes don't last that long. All three tools still work.
What the GnuPG Project is effectively trying to do is stop people from writing about any security problems, period, especially those that are hard to fix.
So then, as a mere user, I gotta ask how so much of the Linux ecosystem -- and indeed, so much of the open-source ecosystem -- came to depend on such a fragile thing as the SKS keyserver network. That's kinda mind-blowing.
Hmmmm, I think there's a bit of a squeaky-wheel situation going on here. Remember that the SKS keyserver pool is mostly a decentralized group of volunteers running a server as a hobby, so you can have all types of people operating keyservers in the pool.
For instance, I've been running a keyserver in the pool for several years, but I don't blame the attackers the way you describe. In fact, I'm openly asking around for a mentor to help build a keyserver implementation that can better deal with these kinds of flooding situations.
Anyway, even though I can totally understand why operators get mad and lash out at people trying to take down a service they run as a hobby to help activists communicate securely, I want to stress that that reaction isn't representative of many of us in the pool.
Either way, the time for Hansen to have warned people about the keyservers was when he first became aware of the vulnerability ("well over a decade" ago), not right after it got exploited on him personally. Everything about this response, from the personal offense he's taken to the lashing out he's done against vulnerability research to the apparent decade-long delay in notification, is unserious and unworthy of a project that purports to protect dissidents against governments.
Isn't that the way it usually gets done for most non-profit altruistic efforts, though? If I'm a church and run a soup kitchen for the homeless, the volunteers who come in and prepare meals and serve patrons are probably not going to be trained professional chefs. They are going to be people who just want to help and are volunteering as a hobby to try to do some good.
I'm sure soup kitchens deal with this kind of situation all the time, where you have a volunteer complain about this or that, and then an outsider says the soup kitchen is a shit show. That doesn't mean soup kitchens shouldn't exist; it's just the drama you have to deal with when running one.
Let's not make the perfect the enemy of the good.
And with all respect to the professionals in the field, casting operational stones at a technically valid solution seems... myopic.
Note I’m not a professional in this field but I occasionally drink with them.
From the article, the only issues seem to be (1) poor SDLC practices leading to toxic, frozen code, and (2) the difficulty of performing protocol/standard upgrades on a decentralized network.
gpg is good, but the infrastructure (keyservers) and tooling (S/MIME, Enigmail, etc.) around GPG are a nightmare. Bootstrapping trust and managing the lifecycle of trust is an unsolved problem, and PGP/GPG makes some of the worst assumptions about users (imvho; e.g., as long as users are expected to understand threat models and manage these things themselves, it's really hard).
So yes, I get the argument that Hansen should have warned people. But I gotta wonder who else has been aware of this vulnerability for years.
And I wonder how bad this could get. I can purge requests to SKS keyservers from my machines, but what about all the upstream impacts? As I understand it, GnuPG authentication is pervasive. And "ask SKS" may be almost as pervasive.
What are reasonable alternatives to this right now? If I’m not using the keyservers, it’s not that bad, right?
This seems bad, but... what should I do? What's the alternative?
I've seen multiple people say that PGP in general is kind of bad and it would be easy for the tech industry to write a secure alternative if it really wanted to. Cool, but that's not useful right now to ordinary people like me who aren't crypto experts who are trying to decide how we should sign/encrypt messages.
I have no idea what I would use as an alternative to PGP.
I have seen various endorsements of Signal from you, Bruce Schneier, Edward Snowden and so on.
I am honestly curious about how this aligns with the fact that Signal:

* has no tests,
* has no CI.
How can the security of software like Signal be asserted so thoroughly when, on the engineering side, basic best practices are not followed and there is no automation ensuring that the important code paths work as expected?
Many Signal features, like voice calls, video calls, reliable message delivery, or simply running without crashing, break regularly in daily use and with new updates. They have bugs.
What gives us (or you) confidence that the safety-critical aspects of Signal are magically exempt from such frequent bugs?
This is a serious question that concerns me.
(8-years Signal user with upstreamed patches.)
There is a "test" directory, but it is negligible: 900 lines of actual test code in Signal-Android, vs. more than 100k lines of Java app source code.
At least I could not find any; tests on `master` did not even compile. See https://github.com/signalapp/Signal-Android/issues/7458#issu...
> Signal uses your existing phone number.
> The number must be able to receive an SMS or phone call.
You can't use one of those shared SMS services. So what, lease a SIM from some SIM farm in wherever, and hope that they're honest?
No privacy-conscious system would require phone numbers.
To encrypt files?
SMS-sized messages are the least hard part of this for me. What I want is to be able to point at a file or folder on my computer and say, "sign that with my private key so I can prove I wrote it" or "encrypt that with someone else's public key so only they can read it". At that point, I don't necessarily care all that much about how the file gets sent over the network.
It doesn't need to be integrated into email, but it does need to be a low-level enough operation that I can use it on an arbitrary block of text, file, or folder of any size. Is there a replacement that does that?
I use Signal, and it's great. But Signal is not a replacement for PGP, it's a replacement for one, very specific use-case for PGP.
Someone else asked how to email securely without PGP. Email isn't secure with PGP. Don't use PGP to send encrypted emails, and don't use email to send secure messages; use a secure messenger, like Signal or Wire.
I acknowledge there are use cases not well covered by secure messengers. The current state of file encryption, which is practically the "hello world" of encryption problems, is a travesty. If you're simply looking to sign something, and later verify that it was you who signed it, use minisign. But that's a very narrow use case.
Minisign could solve some of that, but going back to the point that I don't trust myself to audit cryptography software, Minisign also appears to be a one-person project, and I can't find very many people online talking about it, using it, or looking for vulnerabilities. It's not that I don't trust you (I see you on HN a lot), but I'd feel more comfortable with Minisign if I could find more security people recommending it.
I can drop PGP for anything where I find a different tool that supports that specific use-case that's trustworthy. I'm not thrilled about that, because part of my security process is trying to make it hard for me to make mistakes as a user, and multiple tools hurt that effort. But I can deal.
BUT, I can't just stop encrypting files. I can start using a lot of tiny, individual tools for some of my use-cases, but occasionally, I'm going to be in a situation where I need to do the "hello world" stuff.
To kind of rephrase what I'm asking, regardless of whether or not PGP is good, is it currently the best solution for handling public/private key encryption in the general use case (particularly if I'm not personally using SKS for anything)? Because I can't just decide not to encrypt files any more; even if the current solution is bad I still need to use something. The Minisign main dev is also recommending Encpipe, which could solve some of my use cases, but doesn't support public keys and, again, looks like it's a hobby project that practically nobody in the security world is talking about or auditing. I guess age also looks promising?
In theory, Age and Minisign could meet the majority of my hard requirements by themselves if I could verify that they're trustworthy. But realizing that PGP has been run essentially as a hobby project, it feels a little weird to move to another piece of software with only one serious maintainer.
Do you know of any projects that are aiming to solve this? It feels like all that is needed is a halfway decent standard file format, and some tools to bootstrap it.
That seems to be how we got TLS and SSH, which are the two big success stories of deployed encryption.
I suppose those two protocols have the advantage of interactive negotiation. Whereas software encrypting a file does not get to negotiate any parameters with the software that will later be decrypting it.
Moreover, those protocols had some decent weight behind them. Secure data transport is a problem that matters to essentially everyone these days, whereas secure and portable file encryption really does not. E-mail is a significant use case, but only tangentially, and optimal e-mail solutions are not really optimal portable-file solutions.
I am on the WTF side - GPG is synonymous in my mind with Public / Private keypairs and now this needs re-evaluating.
Tl;dr: I need more context to understand the blast radius before I can evaluate the solutions.
Some of us have a lot of running to do to catch up
I don't think it's the EFF putting activists at risk here.