Hacker News
SKS Keyserver Network Under Attack (gist.github.com)
379 points by Spellman on June 29, 2019 | 189 comments

I'd like to gingerly suggest that this is not how a project that has deliberately set hostile world governments as its adversaries should respond to a trivial, predictable† vandalism attack.

Rather, if they're serious about what they're doing – Hansen, in a related document, talks about the "good advice" he gave to dissidents in Venezuela about using GnuPG – they should thank whoever did this. This attack apparently only hit the accounts of two maintainers – that is, two people who are, or should be, exquisitely capable of distilling signal from the attack, and making sensible decisions to mitigate it going forward.

A serious attacker, on the scale of the adversaries this project has, again, deliberately selected for itself, wouldn't waste the vulnerability this way. They'd wait for the most opportune time and apply the attack broadly to accomplish their own state-level goals.

This isn't the first time the GnuPG ecosystem has responded this way to attacks. They similarly (and dishonestly) attacked the Efail researchers, and in the same document I referred to above, Hansen attacked EFF and Micah Lee for publishing exploit code; "Academic freedom should not be construed as permission to publish attack tools against a critical service with known vulnerabilities". This is what you'd expect from one of the vendors posting about a tempfile race condition on the CORE clique list in 1992; it's preposterously out of step with how the field handles vulnerability research today.

If you're relying on GnuPG for anything serious, you should be alarmed at the way they react to security setbacks.

† Indeed, repeatedly predicted.

Can confirm, I've reported a similar attack [1], along with a few other vulnerabilities, and also published exploit tools. I ended up getting legal threats from two people that I see frequently posting to sks-devel@ mailing list.

Additionally, Robert (GnuPG maintainer who wrote this Gist) has attacked [2] another person who wrote a proof-of-concept filesystem on top of SKS that was intended to highlight how broken the design is.

I have not seen a single open source community that would treat full disclosure with such contempt.

At this point the SKS network continues to run exclusively on community goodwill. This attack seems to be specifically targeted at GnuPG maintainers; if an attacker were deliberately trying to break SKS, they would target someone like Linus Torvalds.

Alternatively, there are other published vulnerabilities, with exploits, that make it possible to take the whole SKS network down within half an hour; they were published more than a year ago. And yet, those have not been used, so far.

[1]: https://bitbucket.org/skskeyserver/sks-keyserver/issues/57

[2]: https://twitter.com/robertjhansen/status/1017863443356020738

> if an attacker were deliberately trying to break SKS, they would target someone like Linus Torvalds.

I wish they did, as I am hoping for an outcome similar to bitkeeper/git.

> I have not seen a single open source community that would treat full disclosure with such contempt.

So what are the acceptable limits of this "full disclosure"?

I should have said "any disclosure": EFail was coordinated (6 months' notice [1]), and yet GnuPG officially downplayed the risk [2], launched a #effail counter-campaign, and blamed the researchers for bad disclosure [3].

With regards to any of the existing SKS exploits specifically: even if any of them were to undergo coordinated disclosure, it wouldn't have helped: trollwot has been available for 5 years, both keyserver-fs and sks-exploit -- for more than a year. Embargoes don't last that long. All three tools still work.

What the GnuPG Project effectively tries to do is stop people from writing about any security problems, period, especially those that are hard to fix.

[1]: https://gist.github.com/tqbf/2ef6bce7d16e9d3e76d790fd99c9618...

[2]: https://twitter.com/gnupg/status/995936684213723136

[3]: https://twitter.com/gnupg/status/996856990818283521

OK, makes sense. And damn, 10 years is >>> a year.

So then, as a mere user, I gotta ask how so much of the Linux ecosystem -- and indeed, so much of the open-source ecosystem -- came to depend on such a fragile thing as the SKS keyserver network. That's kinda mind-blowing.

> This isn't the first time the GnuPG ecosystem has responded this way to attacks.

Hmmmm, I think this is a bit of a squeaky-wheel situation going on. Remember that the SKS keyserver pool is mostly a decentralized group of volunteers running a server as a hobby. So you can have all types of people operating keyservers in the pool.

For instance, I've been running a keyserver in the pool for several years. However, I don't blame the attackers like you describe. In fact, I'm openly asking around for a mentor to build a keyserver implementation that can better deal with these kinds of flooding situations.

Anyway, even though I can totally understand why operators get mad and lash out at people trying to take down the service they run as a hobby to help activists communicate securely, I want to stress that that reaction isn't representative of many of us in the pool.

The fact that it is simultaneously a "hobby" and an "attempt to help activists communicate securely" is emblematic of the whole problem here.

Either way, the time for Hansen to have warned people about the keyservers was when he first became aware of the vulnerability ("well over a decade" ago), not right after it got exploited on him personally. Everything about this response, from the personal offense he's taken to the lashing out he's done against vulnerability research to the apparent decade-long delay in notification, is unserious and unworthy of a project that purports to protect dissidents against governments.

> The fact that it is simultaneously a "hobby" and an "attempt to help activists communicate securely" is emblematic of the whole problem here.

Isn't that the way it usually gets done for most non-profit altruistic efforts, though? If I'm a church and run a soup kitchen for the homeless, the volunteers who come in and prepare meals and serve patrons are probably not going to be trained professional chefs. They are going to be people who just want to help and are volunteering as a hobby to try to do some good.

I'm sure soup kitchens deal with this kind of situation all the time, where you have a volunteer complain about this or that, and then an outsider says the soup kitchen is a shit show. That doesn't mean soup kitchens shouldn't exist. It's just the drama you have to deal with when running a soup kitchen.

Soup kitchens rarely position themselves as being secure against CIA poisoning attacks.

Can you please explain a bit more about these CIA poisoning attacks? As far as I know, the vulnerability here is just flooding keys with spam signatures so much that the public keys crash sks keyservers and gpg when downloaded. That seems like just a basic DoS attack. Where is the CIA poisoning?
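The "basic DoS" framing can be made concrete with a bit of arithmetic. The per-signature size below is an assumption for illustration (real OpenPGP signature packets vary from a few hundred bytes to roughly a kilobyte); the signature count is roughly the scale reported in this attack:

```python
# Back-of-the-envelope: why a flooded certificate breaks clients.
# SIG_PACKET_BYTES is an assumed average size for one spam signature
# packet; real sizes vary, so treat this as order-of-magnitude only.
SIG_PACKET_BYTES = 600
spam_signatures = 150_000   # roughly the scale reported in this attack

cert_size_mb = spam_signatures * SIG_PACKET_BYTES / 1_000_000
print(f"certificate size ~ {cert_size_mb:.0f} MB")  # prints "certificate size ~ 90 MB"
```

Since GnuPG examines every signature when importing a certificate, a key bloated to tens of megabytes can hang keyring operations essentially indefinitely, which is what "crash gpg when downloaded" refers to.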

Right upthread, from the very same author of the gist:


This is the difference between a soup kitchen and a neurosurgery clinic.

I believe an apt analogy might be "the lack of a neurosurgery clinic is not a reason to avoid building a health clinic."

Health clinics rarely trumpet themselves as solutions to brain injuries they clearly aren’t capable of working on.

And yet, if I am suffering from a brain injury and no one in the last 30 years has seen fit to build anything other than a health clinic in my town, I'm probably pretty happy there's a nurse practitioner available.

Let's not make the perfect the enemy of the good.

And with all respect to the professionals in the field, casting operational stones at a technically valid solution seems... myopic.

The professionals are trying very hard to tell you it’s not a technically valid solution. The math on public key encryption is not the issue, it’s operationalizing it. Openpgp is a disaster there.

Note I’m not a professional in this field but I occasionally drink with them.

In what way is this not a solvable problem?

From the article, the only issues seem to be (1) poor SDLC practices leading to toxic, frozen code, (2) the difficulty in performing protocol / standard upgrades on a decentralized network.

I'd hate to see GPG/PGP used among journalists and their sources. Suggesting this as a good way to communicate securely is negligent at best.

gpg is good, but the infrastructure (keyservers) and tooling (S/MIME, Enigmail, etc.) around GPG are a nightmare. Bootstrapping trust and managing the lifecycle of trust is an unsolved problem, and PGP/GPG makes some of the worst assumptions about users (imvho, as long as users are expected to understand threat models and manage these things themselves, it's really hard).

I'm certainly no crypto expert. And, sad to admit, I hadn't even heard of trollwot until today. Or keyserver-fs or sks-exploit. I have read about risks of key collision, but had the impression that faked keys wouldn't actually work.

So yes, I get the argument that Hansen should have warned people. But I gotta wonder who else has been aware of this vulnerability for years.

And I wonder how bad this could get. I can purge requests to SKS keyservers from my machines, but what about all the upstream impacts? As I understand it, GnuPG authentication is pervasive. And "ask SKS" may be almost as pervasive.

I use GPG quite a bit. I sign my git commits with it, occasionally use it to securely transfer files with people, and appreciate to have everything coupled with my Yubikey.

What are reasonable alternatives to this right now? If I’m not using the keyservers, it’s not that bad, right?
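For what it's worth, if you want to be sure you aren't implicitly touching the keyservers, you can turn off automatic key retrieval in GnuPG's configuration. A hedged sketch (these are real GnuPG options, but check the manual for your installed version):

```conf
# ~/.gnupg/gpg.conf -- sketch of a keyserver-avoidance setup

# never fetch keys from a keyserver automatically during verification
keyserver-options no-auto-key-retrieve

# drop unusable signatures when importing certificates
import-options import-clean
```

With auto-retrieval off, signing commits and encrypting files with keys already in your local keyring is unaffected by keyserver flooding.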

This is my reaction as well.

This seems bad, but... what should I do? What's the alternative?

I've seen multiple people say that PGP in general is kind of bad and it would be easy for the tech industry to write a secure alternative if it really wanted to. Cool, but that's not useful right now to ordinary people like me who aren't crypto experts who are trying to decide how we should sign/encrypt messages.

I have no idea what I would use as an alternative to PGP.

To send messages, use a secure messenger, like Signal or Wire. Don't use PGP.


I have seen various endorsements of Signal from you, Bruce Schneier, Edward Snowden and so on.

I am honestly curious about how this aligns with the fact that Signal

* has no tests [1],

* has no CI [2].

How can the security of a software like Signal be asserted so thoroughly when on the engineering side, basic best practices are not followed and there is no automation that ensures that the important code paths work as expected?

Many Signal features, like voice calls, video calls, reliable message delivery, or simply running without crashing, break regularly in daily use and with new updates. They have bugs.

What gives us (or you) confidence that the safety-critical aspects of Signal are magically exempt from such frequent bugs?

This is a serious question that concerns me.


(8-year Signal user with upstreamed patches.)

[1] There is a "test" directory, but it is negligible: 900 lines of actual test code in Signal-Android, vs >100k lines Java app source code.

[2] At least I could not find any; tests on `master` did not even compile; see https://github.com/signalapp/Signal-Android/issues/7458#issu...

How do you use Signal without a phone number?

I mean:

> Requirements

> Signal uses your existing phone number.

> The number must be able to receive an SMS or phone call.


You can't use one of those shared SMS services. So what, lease a SIM from some SIM farm in wherever, and hope that they're honest?

No privacy-conscious system would require phone numbers.

Okay, and to sign commits or emails?

To encrypt files?

To sign or encrypt emails you could use S/MIME. It is much more widely supported than PGP for signing and encrypting emails as well.

S/MIME is even worse than PGP. Don't use it.

Don't encrypt and sign emails.

This is not a realistic solution.

SMS-sized messages are the least hard part of this for me. What I want is to be able to point at a file or folder on my computer and say, "sign that with my private key so I can prove I wrote it" or "encrypt that with someone else's public key so only they can read it". At that point, I don't necessarily care all that much about how the file gets sent over the network.

It doesn't need to be integrated into email, but it does need to be a low-level enough operation that I can use it on an arbitrary block of text, file, or folder of any size. Is there a replacement that does that?

I use Signal, and it's great. But Signal is not a replacement for PGP, it's a replacement for one, very specific use-case for PGP.

You asked (among other things) how to send messages securely without PGP. Don't use PGP to send secure messages; use a secure messenger, like Signal or Wire.

Someone else asked how to email securely without PGP. Email isn't secure with PGP. Don't use PGP to encrypt emails, and don't use email to send secure messages; use a secure messenger, like Signal or Wire.

I acknowledge there are use cases not well covered by secure messengers. The current state of file encryption, which is practically the "hello world" of encryption problems, is a travesty. If you're simply looking to sign something, and later verify that it was you who signed it, use minisign. But that's a very narrow use case.
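For the narrow sign-and-verify case, minisign's interface is small. A hedged usage sketch (the flags below are minisign's documented ones; the file names are examples):

```shell
# generate a keypair: writes minisign.pub and an encrypted secret key
minisign -G

# sign a file: produces release.tar.gz.minisig alongside it
minisign -Sm release.tar.gz

# verify the signature against the public key
minisign -Vm release.tar.gz -p minisign.pub
```

Note it only does detached signatures over files; it deliberately has no encryption, keyservers, or web of trust.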

To be clear, I'm not saying that anyone who criticizes PGP needs an easy solution. I agree with all the stuff I'm seeing in this thread, and I get that the current answer might be, "well, the ecosystem is kind of bad right now." But what I'm getting at is that it's one thing to understand that the ecosystem is bad, but that on its own is not information I (or most people) are equipped to act on.

Minisign could solve some of that, but going back to the point that I don't trust myself to audit cryptography software, Minisign also appears to be a one person project, and I can't find very many people online talking about it, using it, or looking for vulnerabilities. It's not that I don't trust you, I see you on HN a lot, but I'd feel more comfortable with Minisign if I could find more security people recommending it.

I can drop PGP for anything where I find a different tool that supports that specific use-case that's trustworthy. I'm not thrilled about that, because part of my security process is trying to make it hard for me to make mistakes as a user, and multiple tools hurt that effort. But I can deal.

BUT, I can't just stop encrypting files. I can start using a lot of tiny, individual tools for some of my use-cases, but occasionally, I'm going to be in a situation where I need to do the "hello world" stuff.

To kind of rephrase what I'm asking, regardless of whether or not PGP is good, is it currently the best solution for handling public/private key encryption in the general use case (particularly if I'm not personally using SKS for anything)? Because I can't just decide not to encrypt files any more; even if the current solution is bad I still need to use something. The Minisign main dev is also recommending Encpipe, which could solve some of my use cases, but doesn't support public keys and, again, looks like it's a hobby project that practically nobody in the security world is talking about or auditing. I guess age[0] also looks promising?

In theory, Age and Minisign could meet the majority of my hard requirements by themselves if I could verify that they're trustworthy. But realizing that PGP has been run essentially as a hobby project, it feels a little weird to move to another piece of software with only one serious maintainer.

[0]: https://docs.google.com/document/d/11yHom20CrsuX8KQJXBBw04s8...

minisign is a few hundred lines of code most of which are setup for calls into libsodium.

If you need to encrypt files (symmetrically) then use a tool that does this well like Veracrypt.

> The current state of file encryption, which is practically the "hello world" of encryption problems, is a travesty.

Do you know of any projects that are aiming to solve this? It feels like all that is needed is a halfway decent standard file format, and some tools to bootstrap it. That seems to be how we got TLS and SSH, which are the two successes of encryption.

I suppose those two protocols have the advantage of interactive negotiation. Whereas software encrypting a file does not get to negotiate any parameters with the software that will later be decrypting it.

Moreover, those protocols had some decent weight behind them. Secure data transport is a problem that matters to essentially everyone these days, whereas secure and portable file encryption really does not. E-mail is a significant use-case, but only tangentially, and optimal e-mail solutions are not really optimal portable-file solutions.

I am not disagreeing with any particular technical point here, but I am struck reading the thread that there are two sides of a debate: one that is "Gosh, GPG has had a fundamental problem for a decade and it's terrible we as the internet have not solved it, and now the bill comes due", and the other side is "Where the hell did that come from?"

I am on the WTF side: GPG is synonymous in my mind with public/private keypairs, and now this needs re-evaluating.

Tl;dr: I need more context to understand the blast radius before I can evaluate the solutions.

Some of us have a lot of running to do to catch up

[matrix] and Riot.im are alternatives to Signal and Wire.

Unfortunately these have strong privacy implications as they all phone home an awful lot.

Do you have more info about this? As far as I understand if you use e2e encryption with riot/matrix you should be quite safe.

I imagine the GP is talking about https://news.ycombinator.com/item?id=20178267. We've spent the last few weeks going through fixing the issues which this highlighted; there'll be a blog post later today (or tomorrow) giving an update on how we've addressed the points in question.

Yeah I wrote this before I saw that blog post which appears to have hit most of the implications.

sorry no. riot and matrix are both NOT e2e. I wish people would stop repeating as facts what is somewhere on the roadmap

Without more context, saying that matrix is not e2e encrypted is just as much of a lie as saying it is.

Riot and Matrix have had E2EE since 2016. You have to manually enable it when you want it, but that's shortly going to change as per https://news.ycombinator.com/item?id=20315096.

I haven't used it for this, but I think Keybase might be a good system for filling the role of keyservers. If you just want encrypted messaging, it has it. But you can also store your PGP key in there and people can trust it if they trust Keybase's other proofs (which are not necessarily a leap of faith, you can go to the source and verify them).

Proprietary messengers are never secure.

Both Signal and Wire are FOSS though.

It's just a means of public-key visibility, and a nice service with automated importing and search. I have known some people to publish their keys on their own site or service page, though. Remember, you can upload just about any info to key servers (similar emails, names); the key string is the "key", though. This poisoning seems to be aimed at the key server itself, so you may not get the server you are looking for.

A vulnerability in mission-critical software has been known for years, and they're mad that finally someone got fed up enough to publicly draw attention to it in a way that couldn't be ignored or dismissed by the maintainers?

I don't think it's the EFF putting activists at risk here.

I think everyone on this sub thread is on the same page about this.

My impression from the article was less that they'd personally made the decision hostile world governments were their adversary, and more that they'd ended up looking after some poorly-understood software none of them really knew how to change that made that decision 20 years ago.

See the document linked alongside your comment.

It's always sad to see someone taking down a project that is run with the best intentions. However, it may be time to move away from the entire PGP ecosystem.

Consider the post's "We've known for a decade this attack is possible. It's now here and it's devastating.".

Consider also the final section, "PGP is bad technology and it’s making a bad community", of https://blog.cryptographyengineering.com/2018/05/17/was-the-... (by noted cryptographer Matthew Green.)

My sympathies to the victims of this attack.

[EDIT: reworked slightly at 7m to try to be as kind as possible]

The idea of Internet actors (human or machine) owning cryptographic identities in a distributed system is a good one. I don't think we should stray from this approach.

From your Matthew Green link:

> If PGP went away, I estimate it would take the security community less than a year to entirely replace (the key bits of) the standard with something much better and modern. It would have modern crypto and authentication, and maybe even extensions for future post-quantum future security. It would be simple. Many bright new people would get involved to help write the inevitable Rust, Go and Javascript clients and libraries.

Those words were written a little more than a year ago. What if right now is the time to assume that PGP has gone away and start building the next thing?

What alternate projects are people excited about that are solving the problem of distributed cryptographic identity and messaging?

It's not the silver bullet we need, but I really like Keybase's approach. I think any PGP replacement should take notes from their architecture.

I love their approach, but it is still PGP based. Moreover, it is a bit too centralized.

Thing is, a non-centralized system is really hard to monetize. There might be space for some long-form (as opposed to whatsapp, etc) encrypted messaging. But a solution for portable encrypted files (using either symmetric or asymmetric crypto) is hard to monetize.

Note that, while portable encrypted files could be used for encrypted messaging, the use cases and ergonomics are sufficiently different that a good solution for one will not be a great fit for the other.

There's nothing technically wrong (barring a yet-unrevealed exploit) with PGP itself. This thread's topic is a weakness in SKS. PGP just suffers from major UX problems, which Keybase has largely addressed.

To use Keybase, one doesn't even need to know what PGP is. It all "just works". I have successfully introduced non-technical people to Keybase and what's more, these people use it actively and appreciate what it can do for them. Can't really say that about PGP.

> Thing is, a non-centralized system is really hard to monetize

Until our governments support such infrastructure, the only solution is trust funds / non-profit organizations that release all of their R&D for free.

It's not a silver bullet, but is it not better than PGP? Is it not good enough?

It's still too centralized. You and I can't run our own compatible public Keybase servers, or our own private servers. Understandable, as their investors expect them not to give away everything for free.

The client is open source so reverse engineering and improving the server architecture is far from impossible. I think Keybase is making great strides on exploring how we can utilize asymmetric E2E encryption for communication, organization, storage, and everything in between. I think they've made tremendous progress in making E2E cryptography accessible. However we need a 100% FOSS system.

See: https://github.com/keybase/client/issues/6374

> What alternate projects are people excited about that are solving the problem of distributed cryptographic identity and messaging?

I would argue Matrix is a good contender. The Matrix project is working on secure messaging, and they have a lot of really cool solutions for key distribution and federated communication.

"Distributed cryptographic identity" is a slightly hard concept to pin down. In Matrix this is still an open problem, but that's just for the service which links third-party IDs (phone numbers, email addresses, etc) to Matrix IDs (@cyphar:cyphar.com, for instance). But if the problem is distributing keys, then that is a solvable problem.

But then again, maybe a different solution is needed to fill the PGP niche.

The best replacement to PGP would be a messaging network with opt-in, poorly supported encryption?


I don't know what it is with people and Matrix. It seems like a good project, hamstrung by its overzealous cheering section.

> opt-in

Device cross-signing (from my understanding, the last must-have feature before E2EE is considered ready to be the default) is very close to being merged now that Matrix 1.0 is out. Yes, it took several years to get there, but I think it's fair to say that the E2EE design now looks much better than anything else available (and had to solve many more technical problems than [for instance] Signal, due to the needs of federation).

> poorly supported

There are many unmaintained Matrix clients (this is what the top comment of your link points out). Personally I'd prefer if they stopped advertising them on matrix.org, because all of the newer clients either do or will support e2ee.

> hamstrung by its overzealous cheering section

Given that it seems to be the only project that provides modern e2ee in a way where your data is actually controlled by you without a central authority, I'm surprised that so few people are cheering them on.

"All newer clients either do or will support e2ee" is my favorite thing I've ever heard someone say about Matrix.

well, it's true - if you run a daemon like pantalaimon, even one-liner Matrix requests via curl can speak full E2EE. So arguably the older clients now support E2EE too :)

> Given that it seems to be the only project that provides modern e2ee in a way where your data is actually controlled by you without a central authority, ...


(Matrix project lead here.) The linked reddit post is 10 months old, and even then was riddled with bugs (it ignored, or declared 'unclear' for, some of the clients that had E2E, and included loads of random alpha ones to make the situation seem way worse than it was). The topmost reply on that post tried to correct it at the time, but it seems people don't read the replies.

The current situation is that the following clients in Matrix have full E2E support:

* Riot/Web (JS) (aka Riot/Desktop)

* Riot/iOS (ObjC)

* Riot/Android (Java)

* RiotX/Android (Kotlin)

* Weechat (Python)

* Pantalaimon (Python)

* Seaglass (ObjC on macOS)

* Nheko (Qt) (other than file transfer)

Meanwhile, Quaternion (Qt) is currently getting support via GSoC 2019, and the purple-matrix plugin has working (albeit read-only) E2E support. I believe Pattle (pattle.im) is working on E2EE too. And the matrix-python-sdk (not an app) got support via GSoC 2018.

It's true that there are over 100 other Matrix clients out there which don't speak E2EE natively, but that's because "a matrix client" can be as simple as a curl one-liner, and so there are tonnes of experimental and toy and alpha ones as well as the more mature ones which you could use to pad out a list to make native E2EE support look bad.

However, and most importantly, Pantalaimon (https://github.com/matrix-org/pantalaimon) makes any Matrix client (including all the ones in that FUDdy Reddit post) speak full E2EE - by running as a clientside daemon which acts as a friendly MITM for your Matrix traffic and offloads all the hard E2E encryption (and E2E indexing and search, and in future multiplexing multiple local Matrix apps onto one connection to your server - thus acting almost as a general comms daemon which can even be used as a self-sovereign encrypted push service).

That said, I agree that sometimes the enthusiasm of the Matrix community can be overzealous. For instance, there are some bits which we haven't solved yet, for instance:

* Cross-signing keys for one-hop web-of-trust is 1-2 weeks away from landing. It's implemented in Synapse and matrix-js-sdk, but we're in the middle of adding it into the UI for Riot/Web currently. You can see demos at the SDK level at https://matrix.org/blog/2019/06/14/this-week-in-matrix-2019-... if interested. We also need to figure out how to use cross-signing for limited transitive web-of-trust (e.g. within a closed organisation where WOT metadata leakage isn't a concern)

* We don't turn on E2E by default yet for private chats. This is because we want cross-signing to land first, for usability, and also because we don't want to lock out non-E2E-capable clients, and pantalaimon is only a few weeks old. We also want to better solve e2e-search first (by taking the tantivy-powered FTS indexer from pantalaimon and putting it into Riot). Also, we are still chasing down a few edge cases where session keys aren't available - see https://www.youtube.com/watch?v=WgikPxIjsWE for our approach to that, and https://github.com/vector-im/riot-web/issues/6779 for the overall bug.

* We don't have any equivalent of key-servers at all yet.

So yeah, we definitely don't claim to be perfect, but please don't disregard our progress thanks to a confused/malicious Reddit article.

Sovrin[1] seems pretty interesting to me for distributed/decentralized identity. They use a distributed ledger for identity and allow for piecemeal disclosure of identity data. So by default the identity data isn’t public.

[1] https://sovrin.org/

The idea expressed in that quote is utter nonsense. If it's so easy to "entirely replace (the key bits of) the standard with something much better and modern", then go build it! Talk is cheap.

If people aren't adopting an objectively superior solution that currently exists and addresses all of the current use cases, that's an entirely different issue. I doubt that's the case here though.

I think (one of) the issues is that no one has yet managed to clearly define a set of goals that satisfy all current use cases while also addressing concerns about centralization, interoperability, and ease of implementation.

PGP for pre-shared keys is exceptionally useful still. In the payments world, we use it regularly for both interactive application/message level encryption, and also when moving files between companies.

It could die for email, but still be heavily used elsewhere.

I appreciate the sentiment; the wider ecosystem is a shite show. Consider that recent RHEL releases still don't ship an rpm version that supports subkey signing, and that it took something like 10-15+ years to get that added after it was first brought up. Then you have gpg forcing the agent down people's throats, turning a task that should involve a static binary signing packages into a potentially life-altering event... The v1-to-v2 changes and differences (like the dang agent), and the timeline.

It's so embedded in stuff though, even if it feels jammed in and is half sticking out, I can't imagine it going away super soon.

> However, it may be time to move away from the entire PGP ecosystem.

To what exactly?

FWIW Robert Hansen is a nice guy and a total gentleman. Met him at a number of internet freedom and privacy conferences.

> It's written in an unusual programming language called OCaml, and in a fairly idiosyncratic dialect of it at that. This is of course no problem for a proof of concept meant to support a Ph.D thesis, but for software that's deployed in the field it makes maintenance quite difficult. Not only do we need to be bright enough to understand an algorithm that's literally someone's Ph.D thesis, but we need expertise in obscure programming languages and strange programming customs.

Looking at the code [0], it looks like fairly standard OCaml. Any particular reason it's difficult to maintain (other than the lack of popularity of FP in general)?

(It looks like the original author of the SKS Keyserver is Yaron Minsky, the guy who convinced Jane Street to use Ocaml.)

[0] https://bitbucket.org/skskeyserver/sks-keyserver/src/default...

To the commenters blaming OCaml - it is being used in the Project Everest[1] (along with F#) for creating a proven network security stack. Basically, they created ML-like language called F*[2], which fits the task perfectly. There is also a pure OCaml implementation[3] of TLS stack, along with x509[4].

[1] https://project-everest.github.io/

[2] https://www.fstar-lang.org/

[3] https://github.com/mirleft/ocaml-tls

[4] https://github.com/mirleft/ocaml-x509

I believe they are using the C target/dialect of F* for Everest.

> Any particular reason it's difficult to maintain (other than the lack of popularity of FP in general)?

A much bigger issue than the language itself is the overall architecture of the server. It uses Berkeley DB as the main database and only handles one connection at a time. So, if your gossip process starts syncing a huge spam key, you block all front-end web requests (see my issue #61[1]). Also, the keyserver is completely synchronous, so you effectively have to cluster multiple keyservers running on different ports and different databases and load balance across them if you want to add any sort of scalability to your setup.

Overall, the server code feels like an MVP or academic implementation. Definitely not designed for high scale or the ability to handle abuse like this. It would take a heavy re-write to get the server code to where it needs to be, which is why no one has stepped up yet.

BTW, I'd love to step up and write an sks-compatible keyserver in python (using postgres as the database), so that it could scale using something like uwsgi, but so far I haven't been able to find a mentor who can help me learn the gossip protocol that's largely undocumented.

[1]: https://bitbucket.org/skskeyserver/sks-keyserver/issues/61/k...

Here in the comments, however, a new keyserver is presented:


by dpc_pw and Valodim

I'm a bit confused as to the point you're trying to make. Can you please elaborate?

Also according to TFA, the server apparently can cope with these pathological keys just fine, it's the GnuPG client, "production" code implemented in C, that falls over dead after it has downloaded the key from a server. Which leaves me puzzled why the server needs bashing.

It's even worse than it seems. The certificates are only a few megabytes long. https://twitter.com/FiloSottile/status/1145091106138394625

Yes, the code looks fairly simple, I would say.

Looks like fairly standard OCaml and usage of functional programming idioms. Q: Where's the CI and property tests? If you feel uncomfortable maintaining the code base, start there.

I’ve gotten stuck maintaining Python code for a testing framework and again for a CI/CD system. The fact that I know less than a junior programmer didn’t really slow me down that much, but it did make me a bit anxious.

One of those systems involved a large and obvious crypto component. If the python code had been part of that work instead of merely peripheral to it, I would have rewritten it.

Why? Because I can make python work but I can’t tell you if it’s safe. I have no idea what the weird gotchas are that look like good code but are not. What the “printf” of python is. Hackers do.

And I know even less about OCaml. I would not sign up for that gig. Lots of the sort of rational and cautious people you want working on crypto would not.

I get that, but in this case the codebase has safety guarantees baked in via the Hindley–Milner (HM) type system, whereas your Python codebase did not. Additionally, there's a published, peer-reviewed paper for this software that serves as a written specification. Those two things are fantastic resources when coming up to speed with an unfamiliar codebase.

I’m an old school strong, static typing proponent (strong typing shall rise again!) but I laugh at the notion that it protects you from crypto attacks.

Are you a maintainer or an armchair critic? I hope the latter, because if you think type safety is anything more than necessary but insufficient, then that’s number three.

Did he say it defends you against crypto attacks? It does defend you against a lot of attacks that would still endanger the system. Think denial of service attacks, for example. Extremely easy in Python because bugs don't get caught at compile time.

> It does defend you against a lot of attacks that would still endanger the system. Think denial of service attacks, for example.

If you believe that, then read: https://news.ycombinator.com/item?id=20313787

Not sure what your point is.

The way I read it: in practice it's easy to DoS the server even if it's written in the language that was claimed here to protect against DoS. Especially due to its poor scalability.

Also, there are few enough tests that I had to do a text search for the word test to spot them. However many ml files there are in the top directory, the word test appears just nine times.

So that’s the second big problem with this code, and it’s a huge one. The crypto project I worked on had better test coverage than anything I ever did before and quite possibly since. Because it was a dangerous animal and, like a responsible exotic pet owner, I treated it with respect at all times. Unlike the guy I took over the project from.

And because of my paranoid fastidiousness, I stopped a user from shipping with only 8 bits of entropy in their key generator. That would have been a fun bug for a DEFCON presentation.

The author of the gist has his CV on his personal website and lists himself as proficient in F#. :confused:

That wasn't an argument... It was an expression of confusion as to why the author would say what they said.

It's not. This thing:


is not an 'expression of confusion', it's an attempt at offtopic shitstirring and mockery. It's kind of dumb in the gist - keeping it up on HN (where it's sensibly forbidden) is worse.

So one person said yup, and the other person said nope. ;-)

I agree it served to make the author look a bit foolish.


It looks like other OCaml code bases. Hardly a good thing.

My attempt to understand this - please correct liberally

Things I know today that I did not know yesterday

- The GnuPG (GPG) ecosystem seems to suffer from pre-heart bleed-OpenSSL levels of not enough investment and people

- The GPG ecosystem has a trivial DOS attack that can be mounted against it, with bad actors able to append tens of thousands of signatures to any user's public key, effectively making that key impossible to import, thus making anything signed by that key impractical to verify

- This may or may not mean that major distributions' binary packages will simply stop being verifiable - it depends on who uses what key server in what chain of trust. We probably won't find out till more bad actors poison more wells

- This has been "well known" for some time but the solution is not obvious

- It seems that this is the reason keybase works like it does: if a user simply attests that key X is theirs in a second channel, you can trust that as much as you trust the channel. If a key server is the only channel, and for whatever reasons will not delete the 150,000 bad keys, you have a problem

- There are many alternatives to GPG it seems - or at least to the sub-functions under its "brand". Signal to send messages or minisign to sign documents - but do they have the same "OpenSSL" lack of support in them?

- Don't the million dollar companies like DocuSign use GPG?

So that's me - trying to work out if this is the end of the world or a storm in a teacup - thoughts welcome :-)

This roughly aligns with my understanding as well. The additional takeaways I'd add (that may or may not be accurate):

- Many security researchers disagree with the core idea of SKS servers in general (they're essentially just undeletable online storage that anyone can write to). The distributed "Web of Trust" model itself is considered untrustworthy.

- The vulnerability is triggered by the usage of the SKS servers. This is very bad for any piece of infrastructure that relies on them, but if you're only using local keys that you imported and verified yourself, this particular attack doesn't affect you.

- The PGP format itself is cumbersome and has problems (people want shorter keys, and they want a simpler format with less variability). So while this particular vulnerability only affects SKS servers, there's still a strong movement to get rid of PGP in its entirety.

- The fact that the people behind the SKS servers are reacting negatively and angrily may be reason to be worried about GPG in general, since we don't know if the maintainers would respond the same way to other vulnerabilities that aren't restricted to SKS servers.

Similarly, thoughts or corrections welcome.

Agreed - It would be nice to understand the dependency chain for SKS servers

And I would also like to understand what people mean by "get rid of GPG/PGP" - it cannot mean get rid of keypairs, so is it just replacing it with some "nicer" code? What is the problem?

> - This may or may not mean that major distributions' binary packages will simply stop being verifiable - it depends on who uses what key server in what chain of trust. We probably won't find out till more bad actors poison more wells

Debian keys come from keyring.debian.org, so ??? I'm guessing that chains of trust from there go through the SKS keyservers. If that's so, Debian updates will likely be hosed at some point. Unless you disable authentication.

I have noticed that Whonix now includes onion links to repositories. So maybe that'd be safe enough without GnuPG authentication. Yes?

Edit: It does look bad for the Debian family:

A recent guide[0] recommends pulling missing repository keys from hkp://pool.sks-keyservers.net:

> sudo apt update 2>&1 1>/dev/null | sed -ne 's/.*NO_PUBKEY //p' | while read key; do if ! [[ ${keys[*]} =~ "$key" ]]; then sudo apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys "$key"; keys+=("$key"); fi; done

Just tweak that a hair, and you have a list of all Debian package keys in the keyserver. How long before some jerk hits them with trollwot? I wonder how many millions of Debian family installs could be blocked from updating.

Blame the SKS people, the Debian people, or whatever you like, but this could turn out very painful.

The only bright side, which seemed like a bug until this shit show, is that Debian etc by default don't search for missing keys.

0) https://www.linuxuprising.com/2019/06/fix-missing-gpg-key-ap...


Debian installations come with a preinstalled keyring containing the archive signing keys. Upgrades to that keyring are provided via packages, which are signed with the previous archive key. The same goes for Fedora and rpm. Public keyservers and the web of trust are not involved.

On the internet you can obviously find all sorts of bad guides written by random people.

That's very good to know. Thanks. I'm far less freaked.

But what about package build chains? Are there ever (or at least commonly) calls to SKS keyservers?

> - This may or may not mean that major distributions' binary packages will simply stop being verifiable - it depends on who uses what key server in what chain of trust. We probably won't find out till more bad actors poison more wells

All distributions I know use a pre-shared keyring for package signing, distributed on the initial installation media. Public keyservers are not involved.

This is unaffected by any issues with web-of-trust and public keyservers.

GnuPG and OpenSSL didn't/don't suffer from lack of funding; they both suffer from hideous hacky code, and money cannot fix that. There was a reason LibreSSL gutted OpenSSL and created another API.

>Any time GnuPG has to deal with such a spammed certificate, GnuPG grinds to a halt.

So the SKS software is only a part of the problem. Another part is GnuPG, which is unable to deal with a public key with many signatures attached.

GnuPG is written in C (not OCaml) and seems to be well maintained. Looks like fixing it could be an effective mitigation against this attack. Or am I missing something?

Not sure how you could fix an OpenPGP client for this case without changing how the keyservers function.

"You're trying to pull more than reasonably supported 1k signatures. Do you want to skip this step?"

This will protect your machine from being DoS’d, but what if your key is poisoned? Nobody will be able to use it.

Optimize to handle 150K signatures in reasonable time.
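For what it's worth, a pre-import guard like the prompt suggested above is cheap to sketch. Here's a rough Python illustration that counts signature packets (tag 2) before importing a binary certificate. The packet-header walk is a simplified reading of RFC 4880 (no partial body lengths, no ASCII armor), and `safe_to_import` plus the 1k threshold are made up for the example:

```python
# Sketch: count OpenPGP signature packets (tag 2) in a binary key blob
# before importing it, and refuse past a threshold. Simplified RFC 4880
# packet-header parsing; real keys may use partial-length encodings that
# this sketch deliberately does not handle.

SIG_PACKET = 2  # RFC 4880 packet tag for signatures

def iter_packets(blob: bytes):
    """Yield the tag of each top-level packet in an OpenPGP blob."""
    i = 0
    while i < len(blob):
        hdr = blob[i]
        if not hdr & 0x80:
            raise ValueError("not an OpenPGP packet header")
        if hdr & 0x40:  # new-format header
            tag = hdr & 0x3F
            first = blob[i + 1]
            if first < 192:          # one-octet length
                length, i = first, i + 2
            elif first < 224:        # two-octet length
                length = ((first - 192) << 8) + blob[i + 2] + 192
                i += 3
            elif first == 255:       # five-octet length
                length = int.from_bytes(blob[i + 2:i + 6], "big")
                i += 6
            else:
                raise ValueError("partial lengths not handled in this sketch")
        else:  # old-format header
            tag = (hdr >> 2) & 0x0F
            nbytes = {0: 1, 1: 2, 2: 4}[hdr & 0x03]  # 3 = indeterminate, unhandled
            length = int.from_bytes(blob[i + 1:i + 1 + nbytes], "big")
            i += 1 + nbytes
        yield tag
        i += length

def safe_to_import(blob: bytes, max_sigs: int = 1000) -> bool:
    """Refuse certificates carrying an absurd number of signatures."""
    sigs = sum(1 for tag in iter_packets(blob) if tag == SIG_PACKET)
    return sigs <= max_sigs
```

Counting packets is a single linear pass; the expensive part in GnuPG is reportedly what happens after import, so a cheap pre-import count like this could reject poisoned certificates early.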

Is the new verifying SKS from Sequoia PGP (written in Rust!) affected as well? https://sequoia-pgp.org/blog/2019/06/14/20190614-hagrid/

It is not! That's why the author recommends using the running instance at keys.openpgp.org, as a replacement for the sks pool.

Note that Hagrid is not an "sks in rust". It is different in a lot of ways, see https://keys.openpgp.org/about/news#2019-06-12-launch

(disclaimer: I maintain keys.openpgp.org)

It's fascinating that the conversation in the GitHub comments went to both personal attacks on the author of the post and defending child pornography in the span of like five responses.

I feel like part of the problem is that anyone who's skilled enough to implement solutions has better things to do with their time than participate in a discussion of that quality.

Hmm. Has a site ever experimented with separate comment sections? Put simply: on- vs. off-topic (or maybe "meta"). Our comments would be in the off-topic section, for example.

I regularly see people apologizing for being off topic. Clearly they have something they think is worth saying, but are afraid to pollute the discourse.

I think this falls into one of those social problems with no technical solution. The culture of PGP, as described in the post itself, is that anyone can participate, anyone can upload signatures, anyone can run keyservers, there's no way to remove uploaded keys or signatures ever, and there's no central authority for what is on the keyserver network. Given that culture, there's a social expectation that if you're not going to let the pedophile with the anime avatar have the rest of his arguments taken at face value, you're censoring people and might as well go use one of those corporate sell-out encryption systems like Signal.

(To your actual question: meta.stackoverflow.com, Wikipedia talk pages, meta.wikimedia.org, etc. Also various email/chat communities have defined off-topic lists/rooms. When the participants do actually want to keep off-topic discourse to the side, and the off-topic discussion isn't an attack on the competence of someone reporting a problem or a desire to propagate child pornography, then it's merely a technical problem of enabling them to do it.)

The author sounds a bit overdramatic and makes many logical jumps. OCaml might not be in the news or what the cool kids use these days, but that doesn't make it a bad language. The codebase doesn't look impossible to restructure. The attack vector has been known for a long time. Also, it's not like we haven't seen important code go pretty much unmaintained in far more popular languages before, e.g. OpenSSL.

SKS doesn’t implement any modern web security features (the whole thing does pre-date the web!) — as a result, you can (ab)use SKS as a free backend store for your JavaScript apps: https://www.quaxio.com/message_board_over_pgp_key_servers.ht...

Isn't WKD supposed to help out with key distribution for email?

* https://wiki.gnupg.org/WKD

* https://tools.ietf.org/html/draft-koch-openpgp-webkey-servic...

I'm thinking that's a better way to publish keys these days anyway.

I have my own domain, so maybe OPENPGPKEY record in my domain as well

DNS-Based Authentication of Named Entities (DANE) Bindings for OpenPGP


It would be a better way, but the technology hinges on support by e-mail providers. I wouldn't recommend holding your breath.

The other contender is Autocrypt, which performs key exchange inline in emails in an automated fashion. It only depends on client support, and has gained at least some traction (enigmail, k9, mailvelope, gpgOL, delta.chat, and some others).

> It would be a better way, but the technology hinges on support by e-mail providers. I wouldn't recommend holding your breath.


> The objective of the project was to develop new mechanisms for the reliable and automatic public PGP key exchange between e-mail providers. The results have also contributed to the WKS/WKD standard that is part of the GnuPG project.


I was looking at going with mailbox.org or possibly Protonmail (though they don't have calendars at the moment and I use that); both apparently support it. As do Thunderbird+Enigmail and K9/OpenKeychain.

I have noticed the AutoCrypt method.

Part of the reason I changed my email is because I had in the past submitted a few keys to the sks network which I lost the private keys to, they were also submitted with an infinite expiry. I was a stupid kid.

So I am unlikely to submit my new keys to the sks network. Just store them in my domain, WKD, and on my blog.

> Part of the reason I changed my email is because I had in the past submitted a few keys to the sks network which I lost the private keys to, they were also submitted with an infinite expiry. I was a stupid kid.

That's why most recent versions of GnuPG automatically create keys with expiry set to 2 years.

> I have my own domain, so maybe OPENPGPKEY record in my domain as well

> DNS-Based Authentication of Named Entities (DANE) Bindings for OpenPGP

WKD has some benefits over OPENPGPKEY - it keeps the request confidential (as WKD uses plain HTTPS). WKD is just easier to get right, which is why it's more broadly supported. GnuPG, which supports both of them, defaults to WKD. If an OPENPGPKEY request is made, it seems GnuPG doesn't even validate DNSSEC signatures: https://lists.gnupg.org/pipermail/gnupg-users/2011-December/...
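For anyone curious what a WKD lookup actually looks like on the wire: the client hashes the lowercased local part with SHA-1, z-base-32 encodes it, and fetches a well-known URL. A rough Python sketch of the direct method per draft-koch-openpgp-webkey-service (the function names are mine; the advanced method uses an openpgpkey. subdomain instead):

```python
# Sketch of how a WKD lookup URL is derived (direct method, per
# draft-koch-openpgp-webkey-service): lowercase the local part,
# SHA-1 it, z-base-32 encode the digest, fetch a well-known path.
import hashlib

ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32 alphabet

def zbase32(data: bytes) -> str:
    """Encode bytes as z-base-32, most-significant bits first."""
    bits = nbits = 0
    out = []
    for b in data:
        bits = (bits << 8) | b
        nbits += 8
        while nbits >= 5:
            nbits -= 5
            out.append(ZB32[(bits >> nbits) & 31])
    if nbits:  # pad final partial group (never hit for SHA-1's 160 bits)
        out.append(ZB32[(bits << (5 - nbits)) & 31])
    return "".join(out)

def wkd_direct_url(addr: str) -> str:
    local, domain = addr.rsplit("@", 1)
    digest = hashlib.sha1(local.lower().encode()).digest()
    return (f"https://{domain}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")
```

Since SHA-1 produces 160 bits, the hash portion is always exactly 32 z-base-32 characters, and lowercasing means differently-cased addresses resolve to the same path.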

The new keys.openpgp.org service is a mitigation:

> keys.openpgp.org is a new experimental keyserver which is not part of the keyserver network and has some features which make it resistant to this sort of attack. It is not a drop-in replacement: it has some limitations (for instance, its search functionality is sharply constrained). However, once you make this change you will be able to run gpg --refresh-keys with confidence.

I know the folks behind this and I think they’ve approached it thoughtfully and realistically.

It’s using a modern OpenPGP implementation and language (Sequoia, Rust) which is a big win. Despite it being centralised, I’d encourage folks to have a look.

On that issue, SKS has become so troublesome to run that the number of peers has steadily decreased to the point where there are only 2 entities running the HKPS (“secure”) pool, so in reality SKS is centralised too, as well as unmaintained. Source: I run a key server and a key expiry reminder service.

Out of curiosity, which is more obscure: OCaml or Rust?

Probably OCaml. While it's been around much longer, it's never really reached mass acceptance (though does get used here and there). Rust is newer but I'd estimate it's already more used, and its adoption in industry is growing quite quickly.

This is probably to do with the fact that OCaml doesn't necessarily solve any problems that are apparent to businesses, whereas Rust solves the very apparent "manual memory management makes massive vulns trivial" problem.

I'm not sure which language is actually more approachable for someone trying to learn it from scratch though.

OCaml is very popular in academia though, especially in the field of theoretical computer science and formal verification. Coq, Frama-C, Flow, CompCert, etc are all written in OCaml. Heck, if you are running a graphical GNU distribution chances are that you have installed FFTW, which is written in OCaml. The "industry" is not the only thing that matters when considering the adoption of a language.

Also a Mirage OS/unikernel, BAP and BinCat binary analysis frameworks, Facebook Infer source-level static analyzer, etc.

Reason (the frontend framework/language by Facebook) is OCaml.

Have you used Reason for anything serious? How was it?

Particularly if you're talking about a quasi-academic PGP community.

Fun fact: Rust’s initial implementation was in OCaml.

Right? I was reading about this and thinking "hey, maybe technically sound but less well-known languages like OCaml or Rust are not always a good choice".

The question is not obscurity, the question is security. And there ocaml wins by miles over rust.

Are you talking about the compiler itself? Or the tooling and the libraries that you have to deal with when you try to actually use the language?

A bit more[1] on consequences of this attack, gist from the same author.

[1] https://gist.github.com/rjhansen/f716c3ff4a7068b50f2d8896e54...

This seems a bit "shoot the messenger" to me. If anything I think those efforts should be applauded as the signal flares they are: "this is broken and it's only a matter of time until it has real world consequences".

Seriously. The original post states at least three times they knew about the issue for over a decade. At what point does "full disclosure" become valid if not after over a decade of warning?

The fact that the chosen keys in this attack were not Mozilla or whoever and instead maintainers suggests to me that this "attack" is someone who, having seen this vulnerability left unresolved for a decade, decided to force the issue before someone used it for really nefarious purposes.

It's a black hat solution and not a nice thing to do at all. It's probably the wrong thing to do. But, is this really worse than waiting for someone to use this in earnest? I'm not sure.

Wow, there's much more:

>Special criticism goes to the Electronic Frontier Foundation, which paid Micah Lee to publish premade attack tools to exploit these design misfeatures in the keyserver network


>January of last year — January 16, 2018 — one user threatened to do what pretty much happened this week.

To be clear: is this suggesting that it is currently entirely unsafe to update any operation-critical equipment? It seems that now that the PoC is out in the wild, it will be a matter of days/hours before someone hits a major contributor to the major Linux distros; and all package managers begin to fail.

I don't think this is correct. Debian and its derivatives, at least, use a separate keyring for apt than the rest of the system uses. Though Debian does have a keyserver, pushes aren't automatically added to the user-facing keyring; they are manually moved over by a keyring maintainer who would presumably notice someone with a multi-megabyte key.

Ubuntu runs its own SKS keyserver (keyserver.ubuntu.com) and add-apt-repository uses it when adding a PPA repository. I think in theory it's still possible to break the package manager in Ubuntu if someone decides to poison a popular PPA repository/key.

PPA repository keys are generated by Launchpad; mere Launchpad users cannot inject arbitrary PPA repository keys.

Package managers don't use keyservers to get their keys. Keyring updates are usually shipped as package updates or through a different mechanism. And distro media embeds the signing keys in the ISO to avoid the TOFU problem.

Yes, this seems like it's going to range from very bad to outright terrible, in particular if the issue propagates prior to something like gpg being patched, since the package managers would be locked out from updates.

Every time there's an article about IoT security there's a discussion about lack of (security) updates and an upcoming Armageddon. Interestingly it'll be those devices that do not update that are immune to this type of thing.

Package managers don't use keyservers to get their keys (at least, not for the distro's repos). I believe Ubuntu might use them for PPAs, but openSUSE doesn't use them for any OBS projects (and I'm pretty sure this is the same for any RPM repo).

>> immune to this type of thing.

The _only_ type of thing that they are immune to. This is like saying “a car with a broken engine is the safest car in the world - it never moves!”.

I think the standard analogy for this sort of thing is "a broken clock shows the right time twice a day".

Sibling comments have discussed how this affects Debian, Ubuntu, and opensuse -- any Arch users know how this affects us? Seems like official repos should be fine but what about packages from the AUR?

I don’t think the AUR has a concept of package signing—a PKGBUILD will often download a tarball from somewhere and any signing is ad-hoc. The official repositories use their own keyring (which is distributed without a key server).

Correct. The only time when this would concern you is when you add a third-party repository, e.g. one of [1]. This usually involves a manual TOFU step where you do the equivalent of `gpg --recv-keys $ID` on the pacman keyring.

[1] https://wiki.archlinux.org/index.php/Unofficial_user_reposit...

Beware that "gpg --recv-keys <keyid>" (or even "gpg --recv-keys <fingerprint>"!) can be tricked into inserting malicious keys into the keyring:


My /etc/pacman.d/gnupg/gpg.conf had this line:

  keyserver hkp://pool.sks-keyservers.net

> unusual programming language called OCaml

> obscure programming languages

Huh, I knew OCaml was less popular; I did not know it was "obscure". Doesn't Facebook use OCaml?

There's at most hundreds of people writing OCaml for non-academic purposes at least once a week.

I believe this is a project trying to replace the old school key servers.


Is there any piece of distributed internet infrastructure that Google is not trying to replace?

Probably not. It does make sense for Google, though: they don't want to be stuck with mission-critical stuff they don't have final authority over and which is... obscure in both code and people.

I have maintained a keyserver in the pool for many years[1], and attacks like this are, in theory, easy to mitigate. I mean, we're only talking about appending spam to public keys. It's not like the attackers found some flaw in OpenPGP that breaks public keys or valid signatures. In theory, all these attacks do is cause bloat on keyservers.

So why can't we deal with these types of attacks? IMO, the main reason is the server code.

The author mentions that the sks keyserver is "written in an unusual programming language called OCaml", but IMO the language isn't the main issue. Here's a rundown of what I think the main issues are:

* Non-scalable database - SKS keyserver uses Berkeley DB as the database for storing keys, and bdb can't handle more than one connection at a time. So, when huge keys with lots of spam signatures are being written to the database, it blocks all other transactions, including serving web requests (see issue #61[2]). The server needs to be re-written to use a database that can handle multiple connections (e.g. Postgres) so spam writes won't affect the web responses for unrelated keys. Unfortunately, the server is currently single-threaded and synchronous, so it would have to be heavily re-written to add this kind of async scalability.

* Undocumented gossip protocol - The gossip protocol used is a very efficient syncing algorithm that lets servers fill in the gaps in their databases from other servers. Unfortunately, the only documentation I can find on the gossip protocol is an academic paper[3]. This is a huge barrier for someone wanting to write an sks-compatible keyserver. You basically have to read the OCaml code (which doesn't have comments) to figure out how it works. For years I have wanted to try to write an alternative sks-compatible keyserver that could handle attacks like these better, but as it stands now, I'd have to spend a huge amount of time learning OCaml and reverse engineering the code just to understand how to write a compatible gossip implementation. Every once in a while I ask the mailing list for a mentor to help teach me the gossip protocol[4], but so far either no one knows it or no one wants to teach it to me.

* Few validation and revocation features - OpenPGP is very flexible for how you can build public keys and signatures. I think there's a lot of creative things you can do with signature packets to let keyservers and public key owners clean spam out of their signatures. For example, OpenPGP signatures let you specify a "Signature Target"[5] in a signature, which could be used to let public key owners denote other signatures on their public key they wish to revoke, which the keyserver could then stop serving up to public requests. However, since the current server code is largely unmaintained, we can't implement some of these cleanup options.

Overall, I really love the idea of having a public, open, decentralized keyserver pool. Unfortunately, sks-keyserver wasn't written with much ability to scale (which was totally appropriate at the time), so it's in desperate need of being re-written. I'd love to do it, but simply haven't found the time to reverse engineer the gossip protocol. If someone out there wants to mentor me through it, I'd happily write an sks-compatible keyserver that could operate in the pool and also deal with these types of attacks.

[1]: https://sks.daylightpirates.org/ (currently down)

[2]: https://bitbucket.org/skskeyserver/sks-keyserver/issues/61/k...

[3]: http://ipsit.bu.edu/documents/ieee-it3-web.pdf

[4]: https://lists.nongnu.org/archive/html/sks-devel/2016-08/msg0...

[5]: https://tools.ietf.org/html/rfc4880#section-
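For anyone else trying to get their head around what the gossip protocol accomplishes: the paper's algorithm does set reconciliation with traffic roughly proportional to the size of the difference between two servers' key sets. The naive baseline it improves on - ship every digest and fetch what's missing - can be sketched in a few lines of Python (illustration only, not the actual wire protocol; the function names are mine):

```python
# Naive baseline for keyserver gossip: each side advertises the digests
# of everything it holds, then fetches what it lacks. Minsky's actual
# protocol reaches the same end state while exchanging data proportional
# only to the *difference* between the sets, which is what makes SKS
# gossip cheap when two servers are nearly in sync.
import hashlib

def digest(key_blob: bytes) -> str:
    return hashlib.sha256(key_blob).hexdigest()

def reconcile(db_a: dict, db_b: dict) -> None:
    """Bring two {digest: blob} stores to the same state (union of both)."""
    only_a = db_a.keys() - db_b.keys()
    only_b = db_b.keys() - db_a.keys()
    for h in only_a:
        db_b[h] = db_a[h]  # B fetches what only A had
    for h in only_b:
        db_a[h] = db_b[h]  # A fetches what only B had
```

Note the union semantics: they're also why spam can never be deleted from the pool, since any server that drops a key will just have it gossiped right back in.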

I don't think that the base issue lies in the implementation: you can make it as scalable, parallel, and documented as you wish, but if what you are doing is basically receive data from whoever sends you something, store it, pass it to other servers, and offer no form of accountability over who can store such data or who can delete it, it will always be trivial to write a script that just sends lots of data and bloats your service. The problem lies in the architecture itself.

Hmmmm, isn't that an inherent property of any for-public-use database? By your definition, any public pki (keybase.io, keys.openpgp.org, etc.), social network (Twitter, Facebook, Mastodon, etc.), and more are vulnerable to someone just writing a script and bloating their database.

What mitigation strategies do other ecosystems use? Why can't they be tried in the keyserver pool?

This is not a complete, perfect, optimal, uncontroversial, or always-trivial-to-implement list, but some common ways to increase attacker costs are to:

0. Put a CAPTCHA on expensive/abused functionality.

1. Rate-limit costly transactions to 1 per hour/day/etc (whatever's appropriate) per IPv4 address.

2. Limit total amount of data added to the db per time period per IPv4 address.

3. Iff you get a lot of abuse from VPS/cloud providers, block or even-more-severely-limit their published IPv4 ranges. Generally speaking a normal user will not write to a pubkey db from a cloud IP.

4. Iff you get a lot of IPv6 abuse, either go IPv4-only (no doubt this will make some people super-mad.. but when it's the only way to keep the service operational..) or treat every /64 as roughly equal to one IPv4 address; sometimes that's a sufficient defense.

5. If you don't like using IPv4 as the scarce good, then use some other primitive such as SMS verification of a phone number (that may be unacceptable for sks due to obvious privacy and highjacking concerns.. but it's basically what Signal does..)

6. Users (and environmentalists) will hate it, but if all else fails, require proof-of-work/hashcash. Periodically expire keys that didn't submit a $1-10 POW ticket each year, etc. Or an equivalent minable cryptocurrency payment.
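A sketch of what point 1 looks like in practice: a per-address token bucket in Python. The capacity and refill rate here are placeholders, not recommendations:

```python
# Sketch of per-IP rate limiting (point 1 above) with a token bucket:
# each address gets `capacity` expensive operations up front, refilled
# at `rate` tokens per second. Thresholds are illustrative only.
import time

class TokenBucket:
    def __init__(self, capacity=5, rate=1 / 3600):  # burst of 5, 1/hour refill
        self.capacity = capacity
        self.rate = rate
        self.buckets = {}  # ip -> (tokens, last_seen_timestamp)

    def allow(self, ip, now=None):
        """Return True and consume a token if this address may proceed."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(ip, (self.capacity, now))
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[ip] = (tokens - 1, now)
            return True
        self.buckets[ip] = (tokens, now)
        return False
```

For point 4, the same structure works if you key the bucket on the /64 prefix rather than the full IPv6 address.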

They certainly are vulnerable. The solutions for social networks lie in client/behaviour analysis: Are you trying to create a 10th account from the same IP? Are you creating multiple accounts with the same browser fingerprint? Have you got any personal details attached? (That's one reason they started pushing phone verification.) Is the action automated? (CAPTCHA.) Is your friend graph a clique of fresh accounts?

A lot of these can't be applied to SKS unfortunately.

You should create multiple accounts with a different browser fingerprint for each account. I usually use the Kameleo software to load different profiles with manipulated browser fingerprints: https://support.kameleo.io/article/what-is-browser-fingerpri...

Wow, that is quite the KCF. This is the core problem:

> The [SKS] software is unmaintained. Due to the above, there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the [OCaml] codebase.

The solution is simple: don't use the SKS keyserver network.

> High-risk users should stop using the keyserver network immediately.

I used to use it, but mainly I just send people signed messages and ask them to send me their public keys. I point them to my Keybase page, in case they prefer to encrypt to my key.

But that isn't generally practical. So, about the suggested mitigation.

> Users who are confident editing their GnuPG configuration files should follow the following process:

> Open gpg.conf in a text editor. Ensure there is no line starting with keyserver. If there is, remove it.

This part makes sense.

> Open dirmngr.conf in a text editor. Add the line keyserver hkps://keys.openpgp.org to the end of it.

I'm not sure whether that's necessary. For Debian users, adding keyring.debian.org makes sense. But otherwise, isn't it best to get keys from first-party sources?
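For reference, the quoted advice boils down to one line in dirmngr.conf (assuming a default `~/.gnupg` layout):

```
# ~/.gnupg/dirmngr.conf
keyserver hkps://keys.openpgp.org
```

dirmngr caches its configuration, so after saving you need `gpgconf --kill dirmngr` (or `killall dirmngr`) before the change takes effect.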

Now may be a good time to plug a project we worked on at my last gig. KeySpace uses IPFS to store PGP keys in a decentralized file system. We used a smart contract on the Ethereum blockchain to store an address-hash lookup. What this achieves is fully decentralized, peer-to-peer encrypted communication. We used it to facilitate trustless OTC negotiation and trading.



Ooh er, missus

>unlikely to be discovered until it breaks an OpenPGP installation.

Why can't you just brute-force this? I.e., test all of them against a PGP install and see which keys break it.

At least you can then quantify and pinpoint the poisoned ones.
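The brute-force idea refines naturally into a bisection: if importing a batch breaks the installation, split the batch and recurse, which needs only O(k log n) import attempts for k poisoned keys among n. A sketch, where `breaks_install` stands in for a hypothetical predicate that imports a batch into a scratch GnuPG home and reports whether it chokes:

```python
def find_poisoned(keys, breaks_install):
    """Return the keys that individually break an import.

    Assumes `breaks_install` is monotone: a batch breaks iff it
    contains at least one poisoned key.
    """
    if not keys or not breaks_install(keys):
        return []
    if len(keys) == 1:
        return list(keys)
    mid = len(keys) // 2
    return (find_poisoned(keys[:mid], breaks_install)
            + find_poisoned(keys[mid:], breaks_install))

# with a stand-in predicate marking "P1"/"P2" as poisoned:
bad = {"P1", "P2"}
assert find_poisoned(["a", "P1", "b", "c", "P2"],
                     lambda ks: any(k in bad for k in ks)) == ["P1", "P2"]
```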

Can somebody explain to me a legitimate use for a single key cross signed by e.g. 100,000 other keys? Is the goal group communications?

I get that arbitrary limits are bad. And that an attack on a system can be converted to an attack on a key, I'm not seeking mitigation in this question.

I just want to understand, in some six-degrees-of-Kevin-Bacon manner, whether there is a real use for a single large sign set rather than, e.g., a Merkle tree of decomposed sub-sign sets.
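To make the Merkle-tree alternative concrete, here is a toy sketch (entirely hypothetical, not how SKS actually stores certificates) of chunking a large signature set and committing to the chunks with a single root hash, so a verifier could fetch one chunk plus a logarithmic proof path instead of all 100,000 signatures:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaves up to a single 32-byte root hash."""
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# commit to 100,000 toy "signatures" in chunks of 1,000
sigs = [b"sig-%d" % i for i in range(100_000)]
chunks = [b"".join(sigs[i:i + 1000]) for i in range(0, len(sigs), 1000)]
root = merkle_root(chunks)
```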

The suggested mitigation (editing `gpg.conf` and `dirmngr.conf`) doesn't seem to work for me. In particular, I created `~/.gnupg/dirmngr.conf` with a line for the `keys.openpgp.org` keyserver (and don't have a `~/.gnupg/gpg.conf`), but `gpg --refresh-keys` still uses `hkps://hkps.pool.sks-keyservers.net`, which the gnupg Info manual (section 3.2, Dirmngr options) says is the default.

Did you `killall dirmngr`? :)

Thanks that solved my issue :)

that affects the onionbalancers as well then :( https://dev.gnupg.org/T3392


Efail was one of the best crypto breaks of 2018, accepted into both Usenix Security (a top-tier academic venue) and BHUSA (the top tier industry venue), and virtually universally lauded by actual cryptography engineers and researchers.

What you've said here is false, a personal attack on the researchers, and absolutely unacceptable on HN. Take this stuff somewhere else.

This is why we can't have nice things.

I think that decentralized services like SKS require a sort of fee mechanism (proof of effort, currency, or otherwise) that can prevent bad actors from inserting malicious material. I definitely think this is one of the cases where blockchain technology makes a lot of sense, and the disincentive of a fee would do very well to mitigate this.

How would a proof-of-work system, let alone a blockchain, work?

Apparently, GnuPG breaks badly at 150,000 signatures. You want adding a signature to be doable on a really old laptop and/or a low-end Android phone; a motivated attacker can simply choose to expend 1,000,000 times as much effort as a not-too-interested user on antiquated hardware.

Of course you can make adding a signature more expensive the more signatures are already there, but that lets a motivated attacker make it impossible to vouch for certain users (keys). Etc.
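For what it's worth, the asymmetry described above is exactly what hashcash-style proof-of-work trades on: verification is a single hash, while minting costs about 2^bits hashes, and the difficulty parameter is the knob that would have to scale with signature count. A minimal sketch (parameters and payload format are illustrative):

```python
import hashlib
from itertools import count

def mint(payload: bytes, bits: int) -> int:
    """Search for a nonce whose hash with `payload` has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(payload: bytes, nonce: int, bits: int) -> bool:
    """One hash to check what took ~2^bits hashes to mint."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

# scaling `bits` with the number of existing signatures would be one
# (flawed, as noted above) way to make flooding progressively costlier
nonce = mint(b"add-signature:keyid", bits=12)
assert verify(b"add-signature:keyid", nonce, bits=12)
```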
