Debian is basically the reason we didn't delete PGP signature support in 2018, because in theory some handful of Debian packages might be better off with this support.
In practice I think the handful of packages that this happens on isn't particularly meaningful or valuable, and in my experience what happens when Debian finds a new key signing packages is... highly variable. I've seen them just disable signatures when a new key shows up (on major packages even), I've seen them just blanket-copy whatever the new key is, I've seen them look at release notes for what the new key is. In one or two cases I've seen them actually track down the project and ask for verification of the new key.
To me, the PGP support in Debian's uscan feels more like security theater than actual security controls, given my experience with the varied responses to a new release being made by a different key.
Even just as a fancy checksum I think it has some value, and I think it does serve some security goals. I feel like this article also kind of amounts to listing off a bunch of threats that aren't being properly mitigated, and it doesn't really do a good job of separating which threats are not mitigated because of intrinsic deficiencies in PGP from which threats are not mitigated because other parts of the system haven't even made an attempt to mitigate the threat.
And then there's the third category, which is "threats which could be mitigated but PGP makes this hard so people tried and gave up." Just giving up entirely doesn't seem like the right reaction here, and that seems to be what the PGP downers are advocating.
There are other efforts underway to mitigate these threats (which could be subject to their own critiques, but let's not get into that here) but PGP has had 20 years to prove its utility in this area and it has resoundingly proved that it (A) does not address the threats it purports to and (B) introduces tons of confusing complexity into processes which are not benefiting from it.
Let me restate that: it is not free to continue supporting PGP. It has a tremendous cost both in its own maintenance and its opportunity cost. Every moment spent attempting to mitigate its fundamentally broken design is a moment that could instead be put into designing something new, that works properly and doesn't require dragging around the massively bloated corpse of 1999-era cryptographic engineering.
If people want checksums, it seems like it would be better for everyone involved to just use checksums? If you just want to make sure that some file hasn't changed, that's a much better primitive for that than signatures.
To me the most interesting part of the article is whether the current signatures are even capable of being validated; the answer is that more than 50% of them are not, and those come from the people who cared enough, in the last 3 years, to still be using this undocumented feature.
Is it possible to build a secure signing system on top of GPG/PGP? Sure. But doing that requires working around or eschewing so many features from it that you might as well just use the base primitives yourself rather than being tied to GPG/PGP.
Also, if you really want a "fancy checksum" for verifying package installs, there's already a better feature in pip that actually works and is well-supported: https://pip.pypa.io/en/stable/topics/secure-installs/
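For anyone who hasn't used it, hash-checking mode boils down to something like this (the package name is a placeholder):

    # Produce a pinned requirement plus hash for an artifact you've vetted
    pip download --no-deps --no-binary :all: somepackage==1.2.3
    pip hash somepackage-1.2.3.tar.gz
    # put the output into requirements.txt, e.g.:
    #   somepackage==1.2.3 --hash=sha256:<digest>

    # Refuse to install anything whose hash isn't pinned
    pip install --require-hashes -r requirements.txt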
It seems like the fundamental problem is that clients don't actually verify the signatures. It seems to me like the first step is pip and other clients should be updated to have a verify mode, and PyPy should start flagging packages that don't verify.
There are a lot of arguments in here that PGP is fundamentally broken, but it's a mature PKI system, and it seems to me like PyPy should stick with it and use it properly. If someone has a different PKI system that they think works better, by all means put together a PEP but "let's just do away with PKI because our implementation is half-assed and bad" doesn't really seem like a good path.
It’s PyPI, not PyPy. The latter is a Python implementation, not the package index.
Adding verification to pip makes the underlying problem worse: the keys themselves are weak and misconstructed, meaning that “verification” provides a false sense of security.
Finally: PGP is not a mature or successful PKI ecosystem, in any sense of either word. The only marginally successful PKI schemes using PGP are those in Linux distributions, and these benefit from constraints (trusted signer sets, baked-in key deployment) that PyPI and other ecosystems cannot provide.
I'd argue that the benefit of package signing is only ever going to be marginal, but that's not a reason not to do it, it's a reason for PyPI to fix their processes.
Weak and misconstructed keys should cause the packages to fail to upload to PyPI. Obviously we can't just flip that switch today but that's the direction we should be thinking.
> Weak and misconstructed keys should cause the packages to fail to upload to PyPI.
This is much easier said than done: keys aren’t uploaded to PyPI at all, only signatures are. This would imply either (1) requiring users to upload their public key ahead of time, or (2) PyPI trawling public key servers in hopes of finding a matching fingerprint. Neither of these is acceptable from an operational overhead standpoint, and the latter is both insecure (birthday attacks on PGP fingerprints are trivial) and requires participation in a half-dead keyserver ecosystem.
This is a universal problem in most package repositories regardless of language. PKI is difficult, and for it to work the users have to do some homework and place trust somewhere.
In a healthier OSS ecosystem, we'd have people counter-signing packages after they've vetted them. If Google approves something through to production, then it's probably okay for you too.
This wouldn't be a bad extension to the GNU license really - requiring reciprocal review.
Yeah, that could work, at least for larger orgs. Not so sure the majority of users would comply with such an extension, or even have the knowledge to review for e.g. security issues. Having a working PKI solution that could work regardless of ecosystem would be awesome. If nothing else, I can research which key e.g. Microsoft uses, and then allow anything signed by that key as an initial threshold.
Making it a requirement of using something commercially would add a lot of transparency though. The concept of a "software bill of materials" is getting increased interest now, and this would be a part of it: if you're using something then you sign it and publish the signature, which declares an acknowledgment that it was reviewed in some way.
Absolutely, but I fear such a solution would lead to a lot of people signing just to be compliant, not because they did a thorough job reviewing. If we could connect it to a reputation somehow, it might have something going for it.
You can already (and since a while ago) verify GPG signatures when installing Python packages.
Nobody does that. The biggest problem the author didn't really address is that most Python packages come with no signature whatsoever. So, what are you going to do about those packages if you wanted to verify the rest? -- just verify those that have signatures, and leave those without signatures alone? -- That just sounds ridiculous...
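For what it's worth, verifying the minority of packages that do ship a detached signature is a purely manual affair; roughly something like this, with placeholder file and key names:

    # Obtain the maintainer's public key from somewhere you already trust, then:
    gpg --import maintainer-pubkey.asc

    # Check the detached signature against the downloaded sdist
    gpg --verify somepackage-1.2.3.tar.gz.asc somepackage-1.2.3.tar.gz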
By and large, PyPI is the same story we had with HTML: incompetence that breeds more incompetence. The same way that Web browsers encouraged crappy Web developers to generate ever more broken HTML by "helpfully" fixing the broken parts, PyPI, by not enforcing good practices, created a community of people who now don't even understand what the problem is.
Fighting this is hopeless, because security is by far not the biggest reason why people download and use Python packages. So, even if security sucks, the users, who, ultimately should've been the ones responsible for their security won't do squat. Instead, maybe, at best, there'll be some broken / partial solution from PyPI.
gpg signatures are not, and have never been, a meaningful "best practice". The only secure package signing schemes that have ever used gpg signatures use gpg as a mechanism to access crypto primitives and have built their own secure system on top of it.
They're using gpg in spite of its features, not because of them, and would almost certainly be better served by using something else.
GPG/PGP is worse than nothing, because it provides an illusion of security in the majority of use cases unless you build significant infrastructure around it, to the point that you're really just using it for access to its crypto primitives.
GPG's Web of Trust cannot answer the question of who is trusted to sign for a particular package on PyPI. At best it can tell you that a key is signed by someone whose key you've signed. That is not a meaningful security control. Practically nobody is signing GPG keys thinking "would I trust this person to sign for every package I might ever want to download"; they are instead at most trying to verify that the identity on the key matches.
Its existence creates a bunch of people who insist on taking up the oxygen in the room any time serious security design is trying to happen, trying to shoehorn gpg into places where it has no business being.
Following "Best practice" is doing what everyone else does, just because everyone else is doing it.
"Best practice" is cargo culting.
Everyone's got different requirements and constraints. If one thing is apparent, it's that pypi doesn't have a requirement that packages are signed, because no one (publishers or clients) is currently using them properly. And yet, python (and pypi) is alive and popular.
If that's the case, the "best" package signature scheme is none at all.
The real question: how can pip still not allow me to verify a downloaded package with a public key of my choice (PGP key or others, the format is irrelevant)?
In most cases, I don't even want to know who the author is, but just be warned if the signing key changed since last time (possibly because pypi has been compromised, which is the elephant in the python ecosystem room).
This minor feature has been blocked for years on ideological grounds (and paired with questionable decisions like purposely hiding the signatures from the pypi UI), on the grounds that keys are not discoverable and not attached to a strong identity (which happens to be an argument close to what this article says).
Just add that possibility, make it opt in, and allow the ecosystem to figure out how to solve the key management/trust/identity problem.
IIRC Google solved this recently in the Play Store by signing packages themselves. But the cost was that it's much harder (impossible?) for me as a package maintainer to be sure that the package I built is the one I'm distributing; it means now Google can alter the package and nobody will be any the wiser.
I think a lot of the trouble with this article (and the article it links about why PGP is broken) is that it doesn't specify a threat model. It makes some good arguments why specific threats are not mitigated, but rather than attempting to fix broken processes or replace specific components it suggests just throwing out the whole idea.
And some of the arguments seem to point toward using a Google-style central packager/verifier, but there are good reasons PyPI doesn't do this and many wouldn't want them to.
Sequoia PGP is pretty nice, but the issue is not so much the signature/encryption library, or even the interface, but the key management (after all, as far as the end users are concerned, PGP signing and encryption are simple commands). If this was easy, there would have been many solutions by now. Integration with a hardware-based KMS service will go a long way in improving the security. I don’t think the algorithm or file format is the main issue here.
PGP is an unfortunate obsolete technology, which sadly failed to evolve with the times.
I think the main thing that crippled PGP is tooling. What we have available is GnuPG, and GnuPG is horrible for every use case but what it was made for: A command-line interactive application, made for a single person's personal usage.
GnuPG insists on a model where it manages everything: key parsing, a key database, and the entire signing/verification process. There's no proper libgnupg either; what gpgme does is call gpg and present a library interface.
The problem with this is that GnuPG is very unfriendly towards attempts to build something new from the pieces. It really wants to be a command-line application that deals with a single person's encryption/decryption activities. So if you want to, say, run statistics on PGP keys, or write a keyserver, or use it to implement signatures for packages, either you do it the GnuPG way, or you very laboriously trick it into doing what you need, or you write your own crypto code.
Option 1 isn't great if you want anything weird like statistics gathering.
Option 2 is just awkward and messy. Both 1 and 2 quickly run into limits. GPG for instance doesn't like trying to work with a database of a million keys. It has heavy startup costs. It's made for humans, not for any kind of heavy lifting.
Option 3 sucks because OpenPGP is extremely complex, and writing crypto code is very ill advised for most people.
And I think that's kinda what crippled the ecosystem. Doing anything but what gnupg wants to do has extreme startup costs, so few people ever try, and most people do it badly.
Doing what the author did here involves really going out of your way -- in a better world we'd have simple to use tooling to do this work, and PyPI would just do validation and reject bad stuff, but doing so is extremely non-trivial in PGP.
It seems like without fail, whenever PGP is mentioned, someone is ready to offer criticism of it, usually without offering a better alternative, since one doesn't really exist.
Yet it is the most stable, the most open, the most proven, and the most supported PKI technology, supporting both encryption and signatures.
I found GnuPG and OpenPGP.js are incredibly easy to integrate into my project. I didn't like the current ways that it is used, so I designed a new way, with disposable private keys, which frees the user from key management burdens. I don't use many of PGP's features which I don't need, such as key management and nested keys.
PGP is still an incredible standard and technology, the rare kind that's remained supported and (mostly) backwards-compatible for 25+ years, like HTML and Perl.
I'm glad it exists, and I anticipate it being useful for at least another 25, thanks to the Lindy Effect.
> It seems like without fail, whenever PGP is mentioned, someone is ready to offer criticism of it, usually without offering a better alternative, since one doesn't really exist.
For signing, minisign, signify, and OpenSSH exist (see the sketch below); most of them have been around for quite a while (OpenSSH signing is the only one that's relatively new).
They're not "alternatives to PGP" in the sense that they will do everything PGP can (notably, no encryption). But that's not really a bad thing, IMO, and for encryption alternatives exist too.
One of the problems is that ecosystem support is often lacking, both for signing and encryption. git can sign with OpenSSH since last year, but that's one of the exceptions. But that doesn't mean alternatives don't exist.
Ecosystem support is hard; you need to convince untold authors of software to implement your new crypto thingymabob, and there's a "well, no one supports it yet, so why should I?" catch-22 here.
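For the curious, the OpenSSH file signing mentioned above is just a couple of ssh-keygen invocations (key path and identity are placeholders):

    # Sign a file with an existing SSH key; "file" is the conventional namespace
    ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n file release.tar.gz

    # Verify against an allowed_signers file mapping identities to public keys
    ssh-keygen -Y verify -f allowed_signers -I dev@example.com -n file \
        -s release.tar.gz.sig < release.tar.gz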
> It seems like without fail, whenever PGP is mentioned, someone is ready to offer criticism of it, usually without offering a better alternative, since one doesn't really exist.
And arguably, it shouldn't exist. Modern thinking is that encryption is integral to a project and how it's implemented depends on what the project does.
The old way is: Secure mail by passing it through PGP. Secure messages by passing them through PGP. Secure packages by passing them through PGP.
The new way is: secure stuff by designing a system specific to what you need. So for instance, encrypting IM gives you the https://en.wikipedia.org/wiki/Signal_Protocol which has a whole bunch of complexity you wouldn't have for signing RPM packages because it needs it, while RPM doesn't.
PGP is much more than email. It is a general-purpose text and binary encryption system with FOSS libraries and user-facing software for every major platform.
It can be used for email, IM, package signing, authentication, and many other uses.
And it is time-tested, with 25 years of library-building and 25 years of people trying to break it.
Yes, it has many features which most projects don't need, and anyone is free to not use those features. Most documents I write do not include tables, but I would not argue that tables are an unnecessary feature.
And same with PGP, I don't use most of its features, but I'm OK with them being there.
25 years of people trying to figure out how to use it. I think the population of PGP users has some extreme selection pressure acting on it--it's a bunch of people who want security and don't care how bad the UX is.
It's a bit like `git` in that yes, it is a complicated tool, but you can do 99% of what most people need to do with like 5 commands. People make exactly the argument you are making about git.
I have been using PGP a lot longer and still struggle to use it.
When you say, "People make exactly the argument you are making about git," well, those people have a point. I use Git in spite of its usability problems because it gets useful work done. I don't see the upside of pushing through the usability problems of PGP. If I somehow put in the hours to get used to it, then what? What am I accomplishing?
In fact I’d say gpg is even more singularly useful than git. Git has competitors which are superior to it in nearly every way; gpg does not, and likely won’t.
GPG doesn't have competitors because modern security thought rejects the approach and treats it as a relic.
The modern thought is that security can't be an add-on, it needs to be part of the system. So encryption for a chat client and encryption for a web browser, and signatures on a binary are all different problems to be solved differently.
In general, for each major use case of file/message encryption (messaging, backup, transport, package signing, contracts, drive/filesystem encryption, &c), there is a cryptosystem that does the job better than PGP, in part by being built out of modern components and a modern understanding of cryptography†, and in part by being designed around the actual problem and not a lossy abstraction of the problem.
So it's frustrating to see PGP advocates talking about these things PGP does that Signal, or Tarsnap, or Minisign, or Certificate Transparency, or whatever don't do on their own. It's like telling a line cook that a Swiss Army Knife is a meaningful competitor to a chef's knife. Like: it doesn't matter, nobody's going to use a Swiss Army Knife on the line, it's not a threat or anything, but it's so obviously wrong.
† People tend to forget how underdeveloped our understanding of cryptography engineering was when PGP/GPG first came about; for a crash course, check out the IETF archives of Rogaway pleading with the IPSEC people to avoid chained CBC IVs.
I don’t understand how having multiple tools, each one needing audits and effort and time to make secure, each one needing porting to all the platforms that people use, each one breaking compatibility with legacy platforms and hardware, is better than an existing tool without any of these drawbacks.
Because the existing tool does a bad job at all of them, and those tools do reasonably good jobs at each of their respective tasks.
Fundamentally the things you want from a secure crypto system differ depending on the context in which you're applying it. Email needs different things than package signing does than file encryption does. It's silly to pretend that the same tool can provide a good and secure experience for all of them.
I don’t think it’s that silly, and I don’t think email is that different from packages is that different from files. They’re all data with some attributes, and pgp handles them nicely, if you can write the interface for it.
Take forward secrecy. This is a property that means that if your key is compromised, it doesn't allow breaking everything you've done with it. This is a very desirable property for email or messaging because you don't want somebody reading all your conversations if somebody ever gets their hands on your key.
Or take non-repudiation. This means that when you sign a message you say "John Smith did this", and John Smith can't deny having done it.
For email or IM we want forward secrecy. You don't want all your conversations to become readable if anyone ever gets their hands on your keys. If you're caught, you don't want the authorities to be able to read your entire conversation about obtaining drugs; you want a system such that even if they get your phone, they still can't decrypt your captured encrypted messages. GPG isn't capable of this.
For email or IM we don't want non-repudiation a lot of the time. If you're caught, you don't want every message being "Signed, John Smith". GPG provides non-repudiation, which is undesirable.
On the other hand, for package signing we don't want deniability. We want to know that John Smith released this thing. We want to hold John Smith to account if he signs a malicious package; that's the very point of John Smith signing it.
I don't want forward secrecy for my email. I actually want to keep my archived emails for an indefinite period. PGP allows me to do this in relative safety: the email is encrypted once at creation time and then stays encrypted. In general, forward secrecy does not work for the case where someone keeps the message. If the attacker can own a user well enough to get their secret key material, they will also get any messages that user still has access to.
Things might be different for a relatively insecure medium like IM but the same principle applies. Most IM users keep their old messages around thus negating the value of forward secrecy.
>For email or IM we don't want non-repudiation a lot of the time.
We might not, but the people we send our messages to will want to prove we sent it. Otherwise you open yourself to harassment and general abuse. If you don't want to sign your emails then you don't have to. Deniability through claimed forgery doesn't really work anyway:
The lack of forward secrecy in PGP means that you can't protect your secrets long term; having forward secrecy, meanwhile, wouldn't prevent you from making arrangements to securely archive. This is so straightforwardly obvious that it's hard to believe this response is made in good faith.
> They’re all data with some attributes, and pgp handles them nicely, if you can write the interface for it.
This apparently almost wilful ignorance of the context of the "data with some attributes" (specifically, how it's used) is, arguably, part of the problem.
It's been how many years, and still no one has written a good interface?
Nobody has written a good interface because PGP gets the primitives wrong. Most notably, the notion of long-term identifying keys you encrypt to directly as the most important service model.
Because security is a serious problem that needs to be treated seriously for it to be useful. You can't just sprinkle it on an existing system like fairy dust, it has to be built into the design to be effective, and it needs to be periodically re-thought mercilessly.
PGP for instance encourages some terrible habits, like awfully long lived keys.
You have somebody signing packages with an ancient DSA-1024 key, perhaps because "SQUEE Philip Zimmermann himself signed that key back in 1996 and I can't part with it". That's not the right attitude to security.
Good security requires a solution fit to whatever specific thing you need to secure, a system designed with it in mind, and a lack of sentimentality and backwards compatibility.
It is not versatile (multi-purpose); it is weakly general-purpose. "Encrypt/sign these bytes with this cipher, these parameters, and this key" is the least common denominator of cryptographic ability, and it ignores all the work needed to build an actually useful cryptographic system around it.
On top of this, its specific interfaces for doing that suck, so you can't even bury it as a safe "primitive" in whatever system you're trying to build (even assuming you wanted to bring in the attack surface of a 25 year old C project).
Something like libsodium is actually versatile in that it offers primitives to build a broad set of safe tools on top of. It does more by doing less.
I would consider verifying a signature with gpg, openssl or openssh a far more important feature for code signing than adding (equivalents of) transport features like OLM's ratcheting or perfect forward secrecy.
If the only way to verify a package is to run code written by the same person who wrote the signing that's pretty bad.
But PGP sucks horribly for that! It's not made for it. Believe me, I tried.
If you read the old PGP manuals, it talks about something like a woman secretly communicating with her lover. That's the use case, not verifying random packages.
0. GPG trusts your key absolutely.
1. GPG trusts the keys you've signed.
2. You can verify stuff people you trust trust.
After that, you're stuck. So if you want to verify the signature on say, the Tor Browser then you either need to know the actual owner of the key (which just says "Tor Browser Developers"), or one of the people that signed it.
I just tried, there's 100242 signatures on key 4E2C6E8793298290 for some reason. Probably an attack.
So, what then? Well, GPG sucks here. You have no tools whatsoever to ask GPG "is there a path between me and this key?". There used to be some random guy that ran a web service, but that's gone.
People have used GPG for signing random software, but GPG itself never actually adapted to this use case. The official tooling needed to solve the problem of "I need to make some sort of estimation of whether there's some sort of trust path between me and the signature of a random person I never met" doesn't exist.
I guess you could use AWK to extract those 100K key IDs, download them all in a loop, import every single one (GPG will slow down to a crawl if you do that, it can't handle large keyrings), and hope that at least one of them carries a signature from someone you know.
Oh, and you better plan that kind of attempt well, because this will take hours, and GPG and related tooling will choke on such a huge keyring. So you need some sort of plan to import all that junk, find what you need, then get rid of the extra.
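Concretely, that brute-force attempt would look something like the following, and even this assumes a keyserver that still responds:

    # Dump the key IDs of everyone who signed the key (machine-readable output)
    gpg --list-sigs --with-colons 4E2C6E8793298290 \
        | awk -F: '/^sig/ {print $5}' | sort -u > signer-ids.txt

    # Import them all and hope one of them is already in your own web of trust
    while read -r keyid; do gpg --recv-keys "$keyid"; done < signer-ids.txt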
Yeah, that's not usable. And I say that as somebody who participated in key signing parties to the point of having a key signed by a couple hundred people.
Relying on the web of trust has to be contrasted with having nothing. Other groups (Debian, Apache, etc.) describe their hierarchy of trust inside or outside of the key servers, so I rarely care how messed up key server contents are.
AFAIK language-specific package managers fundamentally have a trust problem. If they cared enough to make a protocol, they might care enough to fix the actual trust problem; but as it is, it is better that we can reuse tools and the web of trust rather than download a Tor Browser and ask it to verify the next download of a Tor Browser.
On the one hand, I sort of agree with you, I think the disadvantages of current PGP tools are overstated. But I think there are some plausible alternatives depending on your use case. They just aren't bundled into a single tool.
For file encryption, there's age (quick sketch below). This uses all modern crypto and does authenticated encryption.
For signing, there's signify. I wonder if age and signify can be made to use the same keys...?
For live two-party encryption, there are protocols appropriate to the use case. Axolotl for chat, Wormhole for file transfers.
For PKI, there's... whatever people actually do instead of PGP keyservers and WoT, because they've never been a reliable way to find out someone's public key. In practice, you'd either download it from someone's website, get it off a keyring, receive it through some other secure channel, or be given it in person.
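To give a flavour of the age workflow mentioned above (the recipient string is a placeholder, not a real key):

    # Generate a keypair; the tool prints the public "age1..." recipient string
    age-keygen -o key.txt

    # Encrypt to a recipient, then decrypt with the matching identity file
    age -r age1examplerecipientstring -o secrets.tar.gz.age secrets.tar.gz
    age --decrypt -i key.txt -o secrets.tar.gz secrets.tar.gz.age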
It's ironic that people make the argument "the reason no one uses PGP is that it doesn't have network effects, and the reason it can't build a network is that it sucks", because the thing keeping me on PGP is its network effects. In the circles I run in, you can count on people having PGP keys and knowing how to use them for encryption and signing. They've been used in Linux packaging for years, for example. I can't count on someone using age, at least not yet.
Is it the default? That's all that matters. When GPG defaults to AEAD ciphers, you can say that the ecosystem really supports AEAD (GPG is the de facto reference implementation). If it's not the default, that's purely because too much of the installed base doesn't do AEAD --- there's no other reason to default to PGP's pitiful MDC construction.
> Yet it is the most stable, the most open, the most proven, and the most supported PKI technology, supporting both encryption and signatures.
You're completely ignoring the CA system used for TLS. In the niche segment of encrypted/signed email, S/MIME sees an order of magnitude or two more use than PGP. Even DNSSEC, for all its issues, is probably more used than PGP.
The other really obvious comparison is Signal, which secures more messages in a week than PGP/GPG has in its entire lifespan.
I hate to have to point this out, but in the spirit of technical correctness: as bad as PGP is, it would still be a pretty big deal if it was totally broken (like, at the core, rather than its email use case, which Efail did catastrophically break). People would have to do things in response. But I maintain that if DNSSEC had a comparable failure, nobody would even need to be paged; literally nothing important meaningfully relies on it. It's almost perfectly performative.
The CA system is centralized and suffers from all the drawbacks of being so, including but not limited to censorship, lack of privacy, and centralization itself.
I don't understand how it can even be seen as an alternative to something like pgp.
It's a PKI system, and by far the most widely used PKI system at that. Calling PGP "the most proven, and most supported PKI technology" is just simply not factual. You may not like it, but that is no reason to disregard understanding why it is so successful and something like PGP languishes with basically no usage in comparison.
(As a rough estimate, there's something like 10,000-100,000 PGP users.)
The size of the keyrings of global PGP servers is knowable. I don't have a link to the analysis from which I draw the numbers, but the summary is about 100k PGP keys. Take into account unusable keys and duplicates, and some number in the (probably high) tens of thousands is a reasonable estimate for PGP users.
In terms of anecdata, the only people I've known to have used PGP where the ones working on supporting it in the email client; I've seen more evidence of S/MIME email than PGP, and even that is extremely thin on the ground.
The pgpkeydump page links to... six pages about GPG in the "Much has been written about this" part.
GnuPG is one of the cases where "rewrite it in rust" helps a little. rPGP and sequoia-openpgp offer somewhat reasonable library interfaces. pgpkeydump depends on sequoia.
Thanks for posting that link. I did not realize that the GPG maintainers were so terrible. This strongly reaffirms my choice to move away from GPG where possible!
Sequoia is an implementation of OpenPGP, i.e. RFC 4880. It also supports a few non-standard extensions to OpenPGP, e.g. ECDSA and EdDSA signatures, better stream ciphers than CAST5, etc.
I think a legitimate selling point of Sequoia is simplicity: it's a new implementation, written in a memory-safe language[1], and it can make better (but still not optimal) default decisions.
At the same time, I don't think Sequoia succeeds in avoiding PGP's baggage or drawbacks: PGP is defined by its baggage, and any greenfield implementation that avoids that baggage is intentionally breaking compatibility with the thing it's meant to be compatible with.
It's unclear to me (personally) what the value proposition for Sequoia is meant to be, given that it's shackled to the past and can't do better on basic things like certificate malleability. If the goal is for it to be something better, I think it'd be best to cut the losses and start from stronger (and more modern) foundations.
[1]: Except that they've chosen to use Nettle for the cryptographic components, meaning that there's still a good chunk of C under the hood.
> PGP is defined by its baggage, and any greenfield implementation that avoids that baggage is intentionally breaking compatibility with the thing it's meant to be compatible with.
Is there a way in PGP to say "I won't talk to you if you have a weak key, or use ciphers that are not one of <some reasonable list of modern ciphers>"? Like you have in TLS, SSH, and IKE handshakes?
Not and be compatible with the OpenPGP spec. You can define a safe subset of OpenPGP with some additional restrictions (like no weak keys) but that won't be PGP any more, it'll be a new protocol.
Technically true and it would certainly avoid a bunch of confusion if the new subset carries a new name. However there are a bunch of "OpenPGP" implementations out there but when you start comparing them, it's obvious that they differ quite a bit.
Warnings are unfortunately worthless without enforcement, much less adequate presentation. GPG doesn't even bother to prominently warn the user when a key is expired: they tuck the message below the "verification succeeded" message, which is the exact opposite of what they should be presenting.
I think you misunderstand how PGP works. There's no real-time interaction between participants. Participants don't have certificates, only keys. The ciphers and signing algorithms are determined by the OpenPGP spec rather than some negotiation between parties.
The trust model in PGP isn't a certificate with a signing chain but a "web of trust". I know Alice and Alice knows Bob. Alice vouches for Bob by signing his key and giving it to me. Since I trust Alice personally I have some level of trust in Bob.
Yes, but the issue of a key being weak (using a compromised cipher, having too little entropy, etc) is orthogonal to how the actual communication works. If your key is weak for any reason, the encryption and signatures of your messages may be worthless, and that's what warnings should be about. Also, on incoming messages, if they have the same problem, there could be at least a warning about the key not being secure enough/cipher being old or compromised. I'll decrypt the message, but beware. That kind of warning.
The difficult part, of course, is that since there's no official approved way to have a remote/online keysigning party, communicating newly-generated keys to others in a trustworthy and secure way may surely prove to be a problem.
PGP (by which I mean GPG's OpenPGP implementation) will show you the type and size of a public key. It can't tell you that they have a shitty password or have their private key password printed on their t-shirt. When you receive a message the session key was encrypted with your public key so it's only as good as your key.
You can check out an encrypted message to see what symmetric cipher and digest was used. While GPG could throw up a warning there's nothing you can do about a weak symmetric key or digest. There's no return channel to the sender and no handshake.
In the case of signed packages, the issue of poor digests is more the fault of the repository. They are the ones that need to enforce the digests and asymmetric key types they support. They could easily reject MD5 signatures and small public keys.
I'm given to understand that OpenPGP as such is not vile per se; it is a standard on which you can build a secure and flexible system that can evolve as ciphers evolve, etc. It's GnuPG that gave it all bad fame.
It's also where I tend to agree with he-who-shouldn't-be-named-on-hn-lest-a-shitfest-begins: "never ever roll your own cryptography" stigmatized the field. Anyone who would actually like to try to do something is first discouraged by their peer crabs in the bucket repeating the "never ever roll your own cryptography" mantra, and those who get past that are discouraged by the old boys' club of Those Doing Cryptography for Decades, who seem to imply that you need at least a Nobel Prize in Mathematics to join the club, even though those are the same folks who gave us a plethora of cutely named OpenSSL vulnerabilities and made GnuPG an unscalable cabinet of morbid curiosities (where the curiosities are all those wonderful workarounds that put GUIs on GPG, integrate it with other systems, etc.).
I wish People Doing Cryptography for Decades had enough humility to admit that they don't know everything, and that maybe while they may be algorithm demigods (although all those sweet named vulnerabilities would make some question that status), there also exist folks who do API and UX design better. If only people could talk to one another...
> I'm given to understand that OpenPGP as such is not vile per se; it is a standard on which you can build a secure and flexible system that can evolve as ciphers evolve, etc. It's GnuPG that gave it all bad fame.
No, OpenPGP is pretty bad as well. The RFC contains all kinds of things that wouldn't be allowed anywhere near a modern design, including:
* Encouraging compression before encryption (Ss. 2.3)
* A pointlessly malleable packet and certificate format, one that allows certificate bindings to be stripped off without invalidating the certificate itself (Ss. 5.5.1.1)
* All kinds of dangerous primitives: DSA, MD5 and SHA-1, etc.
* A brittle not-HMAC for message tampering detection (Ss. 5.14)
And so on; those were just the ones from the top of my head. OpenPGP's decisions were understandable in the early-to-mid 1990s, and justifiable (for compatibility purposes) through the early-to-mid 2000s. But that ship has long since sailed.
What's the matter with that? It should be secure with any modern cryptography primitives. (I mean, it's obviously no less secure to do `age -e myfile.7z` than it is to do `age -e myfile`.)
CRIME attack: if you can provide something to the target system, then it will compress differently depending on what's provided. If your guess matches part of the secret elsewhere in the plaintext, the compressed version will be smaller, so the ciphertext length leaks the secret.
VOIP statistics: You can make very good guesses about encrypted voice communication by knowing how it compresses speech, figuring out how long different phonemes are, and statistically trying to reconstruct the conversation.
> VOIP statistics: You can make very good guesses about encrypted voice communication by knowing how it compresses speech, figuring out how long different phonemes are, and statistically trying to reconstruct the conversation.
This sounds interesting, do you know of any good resources to start learning about how to do this?
How would you make a CRIME attack with typical use cases for OpenPGP? Most cases I've seen PGP/GPG used, there isn't any opportunity for chosen plaintext.
It's a devilishly complex format with all sorts of footguns and backwards compatibility going back decades, and badly chosen defaults.
Really I think it should be killed too. Right now if you want to talk to everything you'll end up implementing stuff like CAST5, SHA1, compression (seemed like a good idea back in the day, turns out it's not!), and the NSA's backdoored elliptic curve algorithm. This will be a huge undertaking, and a lot of it is frankly crap that shouldn't be used if not actively dangerous.
Or you could stick to modern crypto and follow the latest security practices, and save yourself a whole bunch of work. But then why suffer implementing the packet format? You won't be able to interoperate with software that for some reason in 2023 uses SHA1 anyway.
Is there a thing that has modes of work akin to OpenPGP but without its baggage?
I mean this: decentralised/web of trust (there exist cases when not having to have a CA is a feature), can work in offline mode, uses asymmetric encryption, can use hardware keys, is suitable for email.
> Of the 1067 keys IDs collected through signatures on PyPI, a full 308 (or roughly 29%) had no publicly discoverable key on the major remaining keyservers
Even if they existed on Ubuntu's keyserver or keys.openpgp.org, they'd still be annoying to discover because GnuPG only supports a single keyserver at a time. You want to have correspondence with both Protonmail and Mailvelope users? Sucks to be you.
Not to mention the fact that if I wanted to maliciously sign something, I'd upload a wholly forged key to those keyservers and you'd happily discover them. It would be a step forward if we'd have more operators like Protonmail that would certify new keys, at least that way in many cases we'd know a key is actually old, rather than generated recently for an attack.
This is a great observation! There's a whole extended digression (which I cut out of the post) on how even the keyserver's response is insufficient, for exactly this reason. PGP as an ecosystem completely lacks any strong identity guarantees of the type that PyPI wants; without those guarantees, this is all just fragile hashing with more steps.
> signatures that can be correlated, many are generated from weak keys or malformed certificates. The results suggest widespread misuse of GPG and other PGP implementations by Python packagers
So a bunch of python packages don't implement the spec correctly... and this is pgp's fault.
Yes. As the post says: well-designed signing schemes need to be misuse resistant.
(And to be clear: it's not the "spec" they're failing to implement. The breakages here indicate that people aren't generating strong keys or doing signing operations correctly with the `gpg` CLI; poor design and documentation there is absolutely the tool's fault.)
This has been a problem with so many cryptographic systems over the years. For various reasons they prefer to be backwards compatible over being secure. As the best practices shift, the default values of these applications stay the same.
I understand why, to a certain extent. If your defaults change and people are expecting the old values it could break existing scripts and tool chains. For most applications this argument makes sense, but for security focused tools I think this is a mistake. Defaults should always represent best practices, and people who want specific flags should specify them.
That's not something to get hung up on: what matters is the semantics, not whether or not each bit is defined.
Here's a good example, in 3.2:
> Also note that when an MPI is encrypted, the length refers to the plaintext MPI. It may be ill-formed in its ciphertext.
What does this mean? Is the MPI encrypted separately (how?), or something else? Why would you even mention it being "ill-formed" in its ciphertext (it has to be, unless something has gone very wrong), much less use a wiggle phrase like "may"?
If you dig further through the spec, you find under 5.5.3 that MPIs can be "secret" MPIs, which are apparently encrypted with a passphrase that gets munged through a KDF defined by an S2K identifier. But this is, inexplicably, explained under the "secret packet" part of the spec, not the MPI part. And it does nothing to address the second ambiguity.
Here's a place (5.2.3.1) where the spec is unambiguous, but forces clients to be ambiguous:
> If a subpacket is encountered that is marked critical but is unknown to the evaluating software, the evaluator SHOULD consider the signature to be in error.
This is the exact opposite of what ecosystems like X.509 do; it's a pointless source of (perfectly well-specified!) variance on the client side.
As the post says: PGP has been discouraged on PyPI for years, because using it correctly is difficult. The goal here is not to try and carve out the "good" bits, but to drive home that even the remaining users (who were expected to be experts, given that it's no longer advertised) are continuing to use it insufficiently for its stated goals.
Perhaps a good way forward is to come up with positive incentives, like the badges that already exist for CI success, test coverage, documentation, valid HTML5, etc. There could be a service that verifies that the PGP key can be fetched and that the latest PyPI version verifies. To avoid putting the burden of hosting on you, perhaps you could provide a python script which takes the package name and allows people to self-validate?
A bunch of utilities which can handle "signing" with some sort of a key isn't a solution. That's not solving the trust problem, which is the actual hard part of GPG.
GPG isn't hard to use. It has quirks, but it's not hard - programmers should be able to figure it out.
Trust management is hard and no one has solved trust management well. It doesn't make one iota of difference how file signing is done, what matters is whether it can be done in a way which makes the scope, degree and path of trust clear in a way the user can action.
Hence my question: because the way we solve "trust" on the internet is "trust a megacorp". The global HTTPS system is based on whichever CAs are shipped in our browsers. And those CAs create global trust: they can issue certificates which say that anything, anywhere, is totally who they say they are. Just look at the Root Certificate Authority process (which isn't bad, but it is just "hey guys, totally trust us").
> GPG isn't hard to use. It has quirks, but it's not hard - programmers should be able to figure it out.
I've been using GPG for roughly a decade, and I don't think I can consistently reproduce the basic commands I need from memory. I've lost track of the number of times I've corrupted my TTY by forgetting `--armor`, much less the number of times GPG has helpfully "guessed what I mean" in the wrong way.
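The classic footgun looks something like this (placeholder user ID):

    # Forgetting --armor dumps raw binary OpenPGP packets to stdout,
    # which is what garbles the terminal:
    gpg --export alice@example.com

    # The ASCII-armored form is safe to print, paste, or mail:
    gpg --export --armor alice@example.com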
At one point, I had at least 3 different copies of my key bundle on different keyservers. I wouldn't be able to tell you which one is the right one; I can count on a single hand the number of emails I've received encrypted to the right subkey (and on two hands the number of emails encrypted to any key of mine).
> Just the Root Certificate Authority process (which isn't bad, but it is just "hey guys, totally trust us").
This isn't true in a useful sense: the CA/B standards are pretty transparent, and the Web PKI mandates transparency (through things like CT) in a publicly auditable way. You can see (and verify) exactly what every CA is doing in the Web PKI, at all times.
I don't think PGP really solves any of these problems either, not for the vast majority of users outside of a fairly small group of PGP enthusiasts. PGP has a complex model and perhaps that's actually its weakest point – going "back to basics" would probably be a good thing.
In the meanwhile, there's lots of use cases that could benefit from easier and more straight-forward signing. I had simply given up on signing anything in git because I just couldn't get it to work (it worked, and then it didn't, and then I fixed it with some effort, and then it broke again in obscure ways, and then I gave up) until git supported OpenSSH signing, which I've been using since the day it was committed without problems.
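For comparison, the OpenSSH signing setup in git really is just a few config lines (key paths are whatever you already use):

    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519.pub

    # Sign commits or tags as usual
    git commit -S -m "signed commit"

    # Verification needs an allowed_signers file mapping identities to keys
    git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
    git log --show-signature -1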
To my knowledge, everything in the Python packaging ecosystem that supports PGP signatures, does so by shelling out to the `gpg` command. Presumably gpg should be implementing the spec properly.
I think Maven does PGP fairly well. It's not perfect, but enh.
Sonatype are the community stewards. They make sure you have control over an email address and a packaging namespace. PGP is nice because it _requires_ an email address tied to a cert.
They _require_ signatures for _every_ artifact uploaded.
The maven repository manager makes your release transactional. Either all of your artifacts are signed and the signatures match, or none of the artifacts are published.
Your release is rejected if you fail to pass the test.
After the checks, you are given the option to publish the transaction or not.
There is no undo button, as it should be.
I also reported a bug/security issue to Sonatype and they fixed it within a few weeks.
There is room for improvement:
I think they could get stricter and require the signing keys be listed in the pom or whatever build file you use.
Maven does some signature checking, but it oddly enough doesn't do pgp verification. Given that 100% of signatures are checked on upload, it technically is redundant since nearly everything is TLS these days.
I think Sonatype should publish a transparency keymap. This would probably be an actual good real-world use case for a blockchain too. KeyId -> Artifact Namespace controlled.
> Sonatype are the community stewards. They make sure you have control over an email address and a packaging namespace. PGP is nice because it _requires_ an email address tied to a cert.
This isn't a property anywhere in PGP: a PGP identity can be an email, or it can be any other free-form identifier.
Individual keyservers can gatekeep inclusion based on email verification (which itself is flimsy, but ignoring that for the moment), but there's no strong guarantee that any particular keyserver does that (or will continue to do it, as a matter of policy). Uploading certs for "bill@microsoft.com" was a joke on the SKS servers for years.
trying to guide junior devs in the right direction for python is nearly impossible (don't use setup.py, it's deprecated, ah except that, ah setup.cfg is now too, use pyproject.toml, ah I'll just check the docs, crap they're incomplete and inconsistent)
you can't call your package that as someone already used that name on the internet and it's a flat namespace, sorry you can't share modules between teams because everyone has their own mini pypi to push to and the tooling is actively hostile against namespacing
and the same is true of the build chain; trying to version the packages, even getting coherent dependency resolution between machines, is a complete disaster
Java sorted out all the packaging mess 20 years ago, meanwhile Python continues to re-invent everything badly, and not just badly, really really badly
meanwhile pypa are faffing about mission statements and making github uploads very slightly more secure
You're not associating keys with an account; you're associating keys with a maven namespace. I guess you do have to have an account to post an artifact into a staging repository, but at the end of the day, the keys have to match the namespace and, to quote Stone Cold Steve Austin: that's the bottom line.
So for instance, I now control this maven namespace:
com.github.exabrial
When I created my Sonatype account, I signed up and verified my email. I then had to give them a PGP public key and told them I wanted to control that namespace. Since the namespace was un-occupied, they were like sure np.
Now, whenever I publish artifacts into that maven namespace, they must be signed with that PGP key.
Debian packages, by the way, aren't signed individually; instead Debian relies on "apt-secure (also known as secure apt), which is a tool that will allow a system administrator to test the integrity of the packages downloaded" [1].
I'm tempted to give in to the claim that a problem that is fundamentally a liability concern, and therefore needs legal fallback, is unlikely to have a technical solution.
There's little that inspires confidence that what is used is not actually awful, and there's plenty that shows how fragile and finicky PGP implementations are, and thus how awful they are. Doing things the same way for two decades, combined with not taking additional precautions (like transport crypto), puts a lot of weight on a single thing. That deserves all the scrutiny it can possibly get.
Why would transport cryptography be in scope for package signing?
It's literally an anti-feature, because any practical environment needs to MITM and inspect the incoming package stream to enforce policy, or provide caches.
The abuse of HTTPS to "verify" content on the internet has been one of the biggest missteps.
> Why would transport cryptography be in scope for package signing?
It's not just package signatures that aren't encrypted. In addition to avoiding any MITM exploiting the implementation that verifies those signatures there are other parts most people don't want to reveal to a random MITM.
> It's literally an anti-feature, because any practical environment needs to MITM and inspect the incoming package stream to enforce policy, or provide caches.
It's not. Most users are not in an environment where this would be a positive aspect, rather than a potential way to deny a service or invade their privacy.
This day and age we have better methods than leaving things just plaintext. Take a look at DNS for example.
> Most users are not in an environment where this would be a positive aspect, rather than a potential way to deny a service or invade their privacy.
I think you are wildly disconnected from "most users" if you think most users care about who knows what Linux packages they're downloading, or are at risk of having that ability removed based on MITM inspection of their HTTP traffic.
I think it's the exact opposite at play here. There are many countries with millions of people, and thousands of Debian(-like) users, that get MITMed constantly. There are a few organisations where such MITM is useful and someone provides caching. The cons heavily outweigh the benefits, but you're in a privileged situation.
Linux distributions that use PGP are effectively closed key ecosystems, meaning that their use of PGP is purely incidental: you trust the key material because they come from trusted sources or baked into your distribution, not because of any of the behavior that PGP supplies. PGP is strictly a liability in these contexts: it's more malleable and allows third-party signers to make mistakes that a reasonable codesigning scheme does not.
In other words: each of these distributions could switch to bare signing keys tomorrow, and nothing would change (other than being able to delete a massive pile of insecure GPG code).
I don't think that this complaint is completely meritless but I don't buy most of the arguments.
- keys not in a public keyserver?
Does not mean that the signature is useless.
- half of all keys used to sign on PyPI since 2020 are already expired.
In theory you can't easily sign with an expired key, so I guess what he means is that he found packages using expired keys, but not that the keys were expired at the time the package was signed/pushed.
Keys expiring since then is normal, and it does not mean that the signature will not verify once you take the signing date into account.
I guess it is even more common if you follow the good practice of using a dedicated short-lived subkey valid for 2 years.
In my own personal experience, I already got reports from a user of my package asking if everything was ok because I accidentally signed a release with the wrong key.
This has been discussed thoroughly in this and the previous thread. The gist is that without the public key being discoverable, it's not much more than a hash. Which makes it quite useless.
> but not that the keys were expired at the time the package was signed/pushed.
We can't know that with PGP. There's no validity stapling showing that a key was valid (or invalid) at the time of signing. All we have is that at the time of verifying, the key is expired, and thus the signature is generally treated as useless/invalid.
This is further worsened by the fact that revocations (or extensions) are hard to keep an up-to-date eye on, so you might be verifying a signature and finding it valid even though the key had already been revoked. Yikes.
I think the problem is always the same: there is a difference between verifying a library and verifying its authorship. PGP is made strictly to verify authors, not artifacts.
If I have people I trust, I want to check whether they trust the library I'm about to download. In PGP like most web-of-trust solutions, what is checked is that they trust that the name of the author is correct. That does nothing for security. This is the same for sigstore, etc.
Other solutions like crev do the right thing, allowing people to assign trust into library releases, not just author's identities.
In PGP terms, it means I don't want PyPI to have PGP signatures from their author, so I can verify their author via my web-of-trust. I want PyPI to have PGP-signed reviews from people I trust, so I can trust the reviews.
Python release files have been signed with sigstore for the last couple versions. You can peruse the release tooling that uses it at https://github.com/python/release-tools
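For the curious, verification looks roughly like this with the sigstore-python client's CLI (hedged: the artifact name and the identity/issuer values below are placeholders, not the real values for any particular release; python.org lists the correct pair for each release manager):

    # Hedged sketch: verifying a CPython release artifact with the
    # sigstore-python client's CLI. The artifact name and the identity/issuer
    # values are placeholders, not the real values for any particular release.
    import subprocess

    artifact = "Python-3.12.0.tgz"    # downloaded together with its .sigstore bundle
    result = subprocess.run(
        [
            "python", "-m", "sigstore", "verify", "identity",
            "--cert-identity", "release-manager@example.org",     # placeholder
            "--cert-oidc-issuer", "https://accounts.google.com",  # placeholder
            artifact,
        ],
        capture_output=True, text=True,
    )
    print("verified" if result.returncode == 0 else result.stderr)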
IME, many people want a very simple "sign this file" interface with a few complicating bells and whistles tacked on: certificate chains (maybe authorities, maybe TOFU/HPKP-style pinning), weak cipher rejection, and crypto hardware support.
That's not that hard in the scheme of things but it's definitely not trivial either, especially supporting weird HSMs.
Until something well supported, modern, and easy to use can do the above uncontroversially for 5-10 years, gpg is going to see lots of misuse.
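For a sense of scale, the core of that "sign this file" interface is tiny; everything the parent lists (chains, pinning, HSM support) is where the real work is. A minimal sketch using Ed25519 from the cryptography package (file name is a placeholder):

    # Minimal "sign this file" core, deliberately ignoring the hard parts the
    # parent lists (chains/authorities, pinning, weak-cipher policy, HSMs).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    def sign_file(key: Ed25519PrivateKey, path: str) -> bytes:
        with open(path, "rb") as f:
            return key.sign(f.read())

    def verify_file(pub: Ed25519PublicKey, path: str, sig: bytes) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        try:
            pub.verify(sig, data)
            return True
        except InvalidSignature:
            return False

    key = Ed25519PrivateKey.generate()
    sig = sign_file(key, "release.tar.gz")          # placeholder file name
    print(verify_file(key.public_key(), "release.tar.gz", sig))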
I'm surprised by the number of people who say that gpg is insecure. Is it disliked because it is decentralized and everything has to be owned and tracked by the four MAGA (formerly FAANG) corporations?
The “usability nightmare” isn’t PGP/GPG’s fault, it’s PKI. PKI is hard to use because of its decentralized and free nature.
The “fix” is to channel everything through Google, or a government, or some third party that makes it easy and usable, but that removes the main benefits of security, transparency, and independence.
I think the reason GPG is all we have and has been for 20+ years is because that’s as good as it gets and better than nothing.
It lets people who know how to use it communicate securely.
So you can at least document your way out of what would otherwise be an impossible-to-communicate-securely nightmare.
Though there is nothing inherently free or decentralized about "PKI", and given your conflation of those concepts, I suspect you're not actually aware of where the lines are drawn.
Certainly the decentralized nature of PGP adds some challenges to good usability.
However, a large number of the problems come from the PGP spec itself (some of which is defensible in a 20+ year old spec, but not in a modern system) and from GPG's poor implementation.
I mean the kind of PKI I want to use. I only want to use a free PKI so everyone has access to it, and I only want a decentralized one, as that seems sustainable to me without concentrating power in a central entity.
I use a private, centralized, rather expensive PKI every day to log in for professional work. I wouldn't want everyone to have to use that, and I wouldn't trust it with my private secrets.
I am pretty interested in this topic and would love for something better. But I’ve also used GPG for a few decades now and am able to communicate with people I need to. It’s certainly hard to use, but it’s pretty good for privacy purposes.
> Is it disliked because it is decentralized and everything has to be owned and tracked by the four MAGA (formerly FAANG) corporations?
It's shit to use and instead of improvements we see deflections like this. Sure, there are better implementations being worked on now, but where were they in the past 10, 20 years?
Indeed. I stopped using gpg for my packages, and started with minisign.
Unfortunately, PyPI doesn't support any kind of metadata file, like a signature file, so I can't upload them :(
Unless maybe you name it like a PGP file, but then you are hurting UX. Anyway, I simply upload them to my releases in GitLab or GitHub.
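If anyone wants to wire that into an install step, a hedged sketch of verifying a downloaded artifact with the minisign CLI (the public key string and file names are placeholders):

    # Hedged sketch: verify a downloaded release artifact against its .minisig
    # file with the minisign CLI. The public key string and file names are
    # placeholders; -Vm checks <file> against <file>.minisig by default.
    import subprocess

    PUBKEY = "RWQexamplepublickey"                  # the project's published minisign key
    ARTIFACT = "mypackage-1.2.3.tar.gz"             # plus mypackage-1.2.3.tar.gz.minisig

    result = subprocess.run(
        ["minisign", "-Vm", ARTIFACT, "-P", PUBKEY],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print("signature OK:", result.stdout.strip())
    else:
        raise SystemExit("verification failed: " + result.stderr.strip())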
I disagree. On the technical side, PGP is a pretty strong decentralized model. GnuPG is a pretty solid implementation of PGP.
Are they perfect? No. Do they merit this much criticism? Obviously not. This whole thing reads like an opinion piece criticizing implementation details and attributing them to the whole design. I did a spot check on the guy's authoritative sources and found them _extremely biased_, with one referring to an article disparaging DSA whose author prefixes his technical writing with "more important articles" arguing that white people somehow cannot experience racism even if they have experienced racial prejudice. And they expect to be taken seriously? It's insanity.
But back to the technical aspect - _Of Course_ you should not use 1024-bit RSA or DSA keys. Of course you should choose prime numbers. If someone chooses to use a non-prime field, you're going to have problems - that's not a problem with the PGP protocol or with RSA or DSA. If you're trusting public keys from complete strangers, _you're doing it wrong_. If you're not verifying that the fingerprint matches, _you're doing it wrong_.
For a journalist trying to report his findings in an unfriendly country to a secured third party, PGP is an excellent choice. For a client to secure communications with his lawyer, PGP is an excellent choice. And for anyone who would argue the contrary, I challenge them to forge a signature for the following identity. Go on, I dare you.
Only I own this private key, and only I can sign with it. It's stored offline, secured and airgapped. It'd take more energy than there is on the planet to crack it. But I can easily prove I'm the owner. Trapdoor functions are a wonderful thing. If I were given your public key, and it _followed protocol_ and was a strong key, I'd be able to secure communications to you in such a way that _even I wouldn't be able to decrypt them_.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Go on, forge something. I'm waiting.
-----BEGIN PGP SIGNATURE-----
Why is that any more valid than any other key that says "Anonymous"? When you "prove" you own it, how would I know there's not MITM happening over the internet?
Its use is securing communications over an insecure channel. I can guarantee you that this message has not been tampered with in any way by HackerNews admins if the public key validates the signature of this message. If they had tried to tamper with it, the signature would not validate, because they don't have the private key. So not only would you know if this message is really from me, you'd also know if it was tampered with.
The guarantee is worthless, HN could generate a new key with the same "Anonymous" name on it, and use it to sign whatever they wanted to sign, and modify your comment to post that.
I have no way of finding out whether this already happened or not.
Signing messages with a new key untrusted by anyone is worthless security-wise. In general posting GPG signed messages in forums is of dubious utility, unless you're a celebrity of some sort and have a key with a decent amount of signatures on it.
Even then, GPG makes path finding extremely inconvenient, so even though I'm very well connected on the PGP WoT, it'd take me a serious amount of work to verify a signature unless it belongs to one of the few hundred people whose keys I've actually signed (yup, I used to be quite serious about this).
>The guarantee is worthless, HN could generate a new key with the same "Anonymous" name
Actually, it's not. That new key is an entirely different identity, and if you're struggling to make the distinction between its "name" and its "identity", I'd argue you're not the target audience for PGP in the first place. There are 45,000 John Smiths in the world. Are you going to also argue that the concept of identity is thwarted by that fact? No. I believe this argument is made in bad faith.
>and use it to sign whatever they wanted to sign, and modify your comment to post that. I have no way of finding out whether this already happened or not.
Have you tried checking the signature? (hint: it's invalid - HackerNews manipulates the whitespace). Verifying signatures is incredibly easy in GnuPG. Would you like me to walk you through it?
>In general posting GPG signed messages in forums is of dubious utility, unless you're a celebrity of some sort and have a key with a decent amount of signatures on it.
On the contrary, it's quite useful. Public key cryptography allows you to be absolutely certain that a message with a valid signature was signed by the holder of the private key. You don't need to know who that person is, but you can be certain that they hold the corresponding private key. Your argument seems to be conflating _that_ with an argument about the metadata, e.g. the authenticity of the public key itself, and that's an entirely different discussion orthogonal to the _utility_ of PGP.
By the way, GPG is just an implementation of the OpenPGP protocol. There's more than one implementation (RNP, Sequoia, and OpenPGP.js, to name a few).
>Even then, GPG makes path finding extremely inconvenient, so even though I'm very well connected on the PGP WoT, it'd take me a serious amount of work to verify a signature
So? I never said it was easy. At the end of the day, you have to trust someone. PGP allows you to trust your web of friends. TLS requires you to trust some certificate authority. You're not really making any case against PGP here. Moreover, it's not even close to the use-case I argued.
For a journalist trying to report his findings in an unfriendly country, PGP is an excellent choice. For a client to secure communications with his lawyer, PGP is an excellent choice. And I'm still waiting for you to forge that signature to prove me wrong.
> Actually, it's not. That new key is an entirely different identity
Yes, which is just as worthless as the one you used, and from my point of view neither is better than the other.
> Have you tried checking the signature?
No, there's no point. It is worthless whether it verifies or not.
> On the contrary, it's quite useful. Public key cryptography allows you to be absolutely certain that a message with a valid signature was signed by the holder of the private key.
Yes, and anyone can make a key, and they're all equally worthless unless there's a way for me to develop trust in one of them. And in this situation, there's none.
> So? I never said it was easy. At the end of the day, you'd have to trust someone.
In the situation you're providing here, there's no way for me to trust anyone, so your signature might as well not be there.
>Yes, which is just as worthless as the one you used
Wrong. One produces a valid signature, the other does not. I couldn't care less about your point of view - my interests align with my clients'.
>No, there's no point. It is worthless whether it verifies or not.
An invalid signature indicates the message has been tampered with.
>they're all equally worthless unless there's a way for me to develop trust into one of them
Have you tried engaging with the parent post instead of talking past it? Attributing trust to some key is an entirely orthogonal issue to the utility of a PKI protocol.
> In the situation you're providing here, there's no way for me to trust anyone, so your signature might as well not be there.
Sigh. First off, it tells you two things. (1) The message was signed by the holder of the private key if the signature is valid, and (2) the message has been tampered with if the signature is invalid. And you don't even need to trust the key to deduce these facts. Second, you still haven't forged that signature, so the fact that you're arguing past me instead of posting a valid & forged signature only proves my point. You can't do it.
> Wrong. One produces a valid signature, the other does not.
Wrong. They obviously wouldn't be stupid enough to just modify the message. They'd create their own key, and create a valid signature with that.
Then as an end-user, what it looks like to me is that you signed one message with key BDEC7256 and another with key 1E81885. No way for me to tell which one of those is the actual you, because I don't know who you are.
> An invalid signature indicates the message has been tampered with.
Which is why they'll make a valid signature with their own key.
If they're smart about it, they'll take your text, feed it to their key and silently rewrite your comment. I won't ever get to see your intended signature, only theirs.
And to make it extra-devious, they could arrange so that you see your original submitted version, but I see the fake one.
You kind of do know who they are: the first message was signed by the first key, so you know that key is associated with that message and with the author who sent it. You don't know their name, but you know that this author, whatever their name may be, has control of that key pair.
As an end user, you see that the first message and second message were signed with different key pairs. Since you know that the first key is controlled by the author of the first message, you use a little bit of logic to deduce that the second message must not be authored by them, because it was not signed by the key pair that signed the first message.
So PGP did do its job here, IMO. The end user sees that there are two different authors here, which is what it was meant to do!
>No way for me to tell which one of those is the actual you, because I don't know who you are.
You don't really need to know this oftentimes, you just need to know that author A is in control of key pair A, and any messages that aren't signed with key pair A must not be authored by author A.
An example of this is trust on first use. You find some python library that colors terminal output, and it happens to be signed with key pair A. Going forward if you check the signature of the release artifacts, you know that it was authored by the same person that authored the original library you used, even if you don't actually know who they are.
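A minimal sketch of that trust-on-first-use idea (the pin location, file names, and key format are made up; Ed25519 via the cryptography package): pin the key you see the first time, then refuse releases signed by anything else:

    # Minimal trust-on-first-use sketch: pin the signing key seen on first
    # install, then require later releases to verify against that pin.
    # The pin location and key format (raw Ed25519 public bytes) are made up.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    PIN_FILE = os.path.expanduser("~/.config/pkg-pins/examplepkg.pub")

    def verify_release(pub_bytes: bytes, artifact: bytes, sig: bytes) -> None:
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE, "rb") as f:
                if f.read() != pub_bytes:
                    raise SystemExit("signing key changed since first use -- investigate")
        else:
            os.makedirs(os.path.dirname(PIN_FILE), exist_ok=True)
            with open(PIN_FILE, "wb") as f:     # first use: pin whatever key we see
                f.write(pub_bytes)
        try:
            Ed25519PublicKey.from_public_bytes(pub_bytes).verify(sig, artifact)
        except InvalidSignature:
            raise SystemExit("signature does not match the artifact")
        print("release verified against the pinned key")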
Another example of this is TLS certificates. When creating a TLS certificate for your domain name, there is no cryptographic guarantee that the person who paid for the domain name is in control of the machine that is used to create the certificates associated with that domain name.
The only guarantee that is made, is that the person in control of the machine used to create the certs is the only person who has access to the private key material. So in practice this only proves that the machine you connect to when going to $url is controlled by the person who controlled the machine used to create the certs.
We can imagine person A buying $domain. They set up their DNS records to point to an IP address. If the DNS servers used by Let's Encrypt (for example) lie about the record, it is possible for an untrusted party to create a certificate for person A's $domain.
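You can see how little the certificate itself asserts by inspecting one with the standard library (the hostname below is a placeholder): it binds DNS names to whoever controls the corresponding private key, and nothing more:

    # Standard-library sketch: fetch and print a server's certificate details.
    # The certificate binds DNS names to a key pair; it says nothing about who
    # paid for the domain or who administers the machine. Hostname is a placeholder.
    import socket
    import ssl

    hostname = "example.org"
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    print("subject:", cert.get("subject"))
    print("issuer:", cert.get("issuer"))
    print("DNS names:", [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"])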
For most forum posts where people aren't using real identities all I generally care about is that post X and post Y came from the same person. By including both the signature and the public key I can check that without having to look at anything other than X and Y. (That's assuming that the forum itself is not monkeying with posts).
Some Debian package maintainers do check them when downloading packages from PyPI. For example, in the .dsc archive for https://packages.debian.org/bullseye/limnoria, debian/upstream/signing-key.asc contains the key used to sign releases of https://pypi.org/project/limnoria/