Sequoia-PGP – A new OpenPGP implementation in Rust (sequoia-pgp.org)
305 points by 1wilkens on Aug 2, 2018 | 95 comments

I wish gpg had a decent cli/"api"

At work, we use it to store shared secrets. We can encrypt a file that can be decrypted by multiple keys.

It's a bit hard to remember all the commands so I made a webUI to manage everything.

The feature I like the most is that I can list which files you can access and which files you cannot.

The thing is, to do that, I need to pass tons of obscure, magic parameters, e.g.:

    gpg --list-only --no-default-keyring --secret-keyring /dev/null ops.gpg
and then parse the completely un-parsable output to know which keys can decrypt the file.

So far so good. The problem is that this magic trick only works with gpg 2.0.30 or lower. If you have the latest version, you can see every key that can decrypt a file... except yours. There is no way to know whether "you" can decrypt a file anymore! (How great is that?)

I now have to tell people that if they want to use the nice UI they cannot have the latest version of gpg, which is troubling.

I can't believe that in 27 years, it is still impossible to know which keys can decrypt a file. Or even just have some parsable output instead of the pile of crap the gpg tool can vomit out.

So I'm pretty excited about Sequoia :)

Have you had a look at pass¹? It's a bash script that uses gnupg to encrypt secrets as gpg-encrypted plain text files. It has a very nice feature that allows you to specify a list of keys for all secrets in a specific directory (using a .gpg-id file).

Once set up, accessing the secrets is a matter of using the pass command line tool:

    pass edit some/secret
    # Copy the first line of a secret; by convention 
    # this is meant for a password:
    pass -c some/secret
    # Show the whole secret file:
    pass some/secret

    # Combined with git:
    pass git pull
    # Generate a 24-character random password:
    pass generate some/othersecret 24
    pass git push
We use pass to maintain a set of shared secrets with a small team. The (encrypted) files are pushed to a private git repository (pass supports this out of the box).

1: https://www.passwordstore.org/

I wrote pass, originally just as a dumb bash script that I was using privately, but then I put it on the Internet, and so all of a sudden there was a requirement to _not be awful_. The experience has been pretty frustrating, for precisely the reasons pointed out by GP: the gpg command line interface is atrocious. I'm required to parse things in a million crazy ways, buffer data myself, and work around all sorts of weird behaviors. All of that headache, and then in the end all I get is lame PGP crypto out of it? As I said, frustrating.

On the flip-side, at least we've (partially?) succeeded in taming the beast, and the end result is something moderately usable that you happily recommend to folks on HN. So that's good I suppose. :)

An alternative is to use the GPGME library, which does all the ugly gpg output parsing for you. I realize, though, that a C library is not a solution for everything, especially not for a shell script that you want to keep a shell script. :)

Well... There's always https://github.com/taviso/ctypes.sh ...

Holy crap, that's so cool. :) Thanks!

I've been using pass for a year or two now. It's a great tool. Thank you for making it.

I'll also chime in to say that `pass` is a fantastic tool, and I'm so glad I switched to it as my password manager. So the effort you went through to get there (which I can sympathize with since I've had to use the OpenPGP CLI directly plenty of times) is very much appreciated.

Pass is amazing, thank you!

Out of interest: how big a threat to the encryption is keeping diff files/versioned encrypted containers around? I could imagine that the additional information would reduce the security of the encrypted data.

The only problem I can foresee (assuming that the encryption scheme itself has no weaknesses to things like known plaintext attacks) is that it makes it harder to retire an old, potentially compromised key. You need to expunge the git history and any copies.

Makes sense. Thank you.

Can you elaborate or point me to a resource on how to use it to share secrets for projects? I tried pass, but it stores the credentials in my home directory; I would rather have them in a file inside the project. Also, can it encrypt .env files or something of that sort?

By default it does store passwords in ~/.password-store, but you can override that with environment variables (see PASSWORD_STORE_DIR in the man page). I personally use thin wrapper scripts to change pass's behavior to suit my needs. You can even fork it directly (and cautiously) if you want; it's just a relatively straightforward shell script, after all.

> can it also encrypt .env files or something of that sort?

What are .env files? You mean the config dotfiles in your home directory? If so, you'll probably have to use something like EncFS to encrypt those files. Personally I don't encrypt them, but I also avoid storing cleartext passwords in them as much as possible; many unix programs support reading passwords from an external command. For example, in my muttrc I have:

    set imap_pass = `pass mail/myemail`

I found this brilliant way to manage your dotfiles in an old hn comment. https://news.ycombinator.com/item?id=11071754


I use:

    git init --bare $HOME/.myconf
    alias config='/usr/bin/git --git-dir=$HOME/.myconf/ --work-tree=$HOME'
    config config status.showUntrackedFiles no
where my ~/.myconf directory is a git bare repository. Then any file within the home folder can be versioned with normal commands like:

    config status
    config add .vimrc
    config commit -m "Add vimrc"
    config add .config/redshift.conf
    config commit -m "Add redshift config"
    config push
And so on…

No extra tooling, no symlinks, files are tracked in a version control system, you can use different branches for different computers, and you can replicate your configuration easily on a new installation.


> No extra tooling, no symlinks, files are tracked in a version control system, you can use different branches for different computers, and you can replicate your configuration easily on a new installation.

But synchronizing shared configuration is clunky (you have to cherry pick commits between branches I guess).

I use NixOS and Nix on my MacBook, which allows you to store and version your whole system configuration. I have factored out different parts of my configuration (emacs, zsh, etc.) in different .nix files. So, I just have one file per machine where I import the relevant configurations and specify the packages that I want to have available. E.g. this is my user configuration on NixOS:


and macOS:


Thanks for sharing those, I had kind of written off Nix for personal laptop after first glance, going to play around again.

This is brilliant! I think I'm about to go and replace some make-based infrastructure as a result.

The software requires configuration, which is read from a .env file containing name=value pairs. I was wondering if this tool can help encrypt/decrypt that; so far I have a gpg --symmetric script to do it, but you have to know the password.

For project-specific secrets, you may want to look at git-crypt:


> then parse the completely un-parsable output to know which keys can decrypt the file.

This reminds me so much of a tip in Effective Java: 'Always provide an option for users to access every relevant part of your object. If you don't, people will start to parse your toString output and you will have created an unofficial API that you have to support whether you want to or not' (paraphrased from my faulty memory). That programmers make this mistake again and again is just sad. :(

Hyrum's Law

With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.


I'm in a team where all the consumers are only other internal teams and yet this still happens. I've found that it doesn't matter if you explicitly state to not rely on a particular behavior in your documentation, clients still will. It's always your fault if you break it because "it was your change that broke this client" and "it was working fine yesterday".

Some people may depend on them, but the difference between API guarantees and implementation details is that developers are far less reluctant to break your legs... I mean your application if you depend on the latter.

For example in java the iteration order of hash maps or the behavior of sorts in presence of non-reflexive comparators changed and people did depend on that. Sun was able to change it because it was not part of the API contract.
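The same trap exists in Python (a sketch of my own, not from the thread): before 3.7, dict iteration order was an implementation detail that code nevertheless came to depend on, and set iteration order still carries no promise at all.

```python
# Relying on unspecified iteration order is exactly the Hyrum's Law trap:
# the interpreter happens to produce *some* order, and code quietly
# starts to depend on it.
s = {"b", "a", "c"}

# The only order you may rely on is one you impose yourself:
print(sorted(s))  # ['a', 'b', 'c']

# dicts have *promised* insertion order only since Python 3.7; before
# that, iteration order was an implementation detail that changed
# between versions, breaking code that assumed it was stable.
d = {"b": 1, "a": 2}
print(list(d))  # ['b', 'a'] -- guaranteed only on 3.7+
```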

Man, depending on sort order with a non-reflexive comparator is a TERRIBLE idea. Almost on the level of https://xkcd.com/1172/

GPG is largely just this toString output parsing.

Which has been the source of a number of bugs in various GPG clients in the past (a few even on the HN frontpage for a few moments) and probably will into the future until GPG is no longer used.

tbh, GPG should just provide an RPC option so applications can securely pass data back and forth and receive proper error messages and codes. But I bet this won't happen because GNU fears that sort of thing, considering they won't split up the GCC compiler.

Well there is GPGME: https://gnupg.org/software/gpgme/index.html

But yes I mostly parse colon separated text.

> But I bet this won't happen because GNU fears that sort of things considering they won't split up the GCC compiler.

That's unfounded, there are plenty of GNU projects that have an API.

It's not unfounded. GCC isn't being split up because they fear that, if they did, someone would build a proprietary compiler on top of either the backend or frontend API.

I can see that the same reasoning here would be valid; someone could write a proprietary GPG frontend.

You have one example of a project not providing an API, however there are tons of other GNU projects which do.

Furthermore the GCC decision is well documented in mailing list posts, have you ever seen anyone involved in GPG development claim that they won't allow a library/frontend split for fear of someone writing a proprietary frontend?

This is what I love about Python. Every method is public, but methods that start with an underscore mean "use at your own risk". It suggests that the programmer use a preferred way, without strictly requiring it. Private methods treat programmers like infants.

IMO there's a difference between "use at your own risk" and "this is actually not meant to be called from the outside and if it is it will violate some invariant in the code". Those are two different concepts and it makes sense to distinguish them. Of course in high level highly managed languages like python it makes sense that the difference between the two can be rather fuzzy at times but in low level code there's often a clear difference.

Take for instance a socket class in C++, you might have a private method that deals with the low level details of the libc's socket calls. It's called at construction time and never later, and calling it would cause the current socket to be replaced by the new one (leaking the fd in the process) because it's only meant to be used at init time. Clearly it's not "use at your own risk", it's "code that calls this from the outside is fundamentally broken". Having the compiler enforce this invariant is a useful feature.

Meanwhile you can also have "use at your own risk" methods, for instance "get_raw_fd" if you want to be able to access the underlying socket. It makes it easy to break things but has legitimate use cases.

Of course you could say that you could just tag these private in a certain way and let coder discipline do the rest, but then again you could say that of pretty much all static validation (which, I suppose, makes sense if you like very dynamic languages like Python).

> IMO there's a difference between "use at your own risk" and "this is actually not meant to be called from the outside and if it is it will violate some invariant in the code". Those are two different concepts and it makes sense to distinguish them.

That's what double underscores are for:

  $ python3
  Python 3.5.2 (default, Nov 23 2017, 16:37:01) 
  [GCC 5.4.0 20160609] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> class Foo:
  ...     def foo(self):
  ...         print("I'm a public method.")
  ...     def _foo(self):
  ...         print("I'm a private method.")
  ...     def __foo(self):
  ...         print("I'm so private that you have to really know what you're doing to even call me.")
  >>> foo = Foo()
  >>> foo.foo()
  I'm a public method.
  >>> foo._foo()
  I'm a private method.
  >>> foo.__foo()
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: 'Foo' object has no attribute '__foo'
  >>> foo._Foo__foo()
  I'm so private that you have to really know what you're doing to even call me.
Now in theory it's not quite what you're talking about, because it does rely on coder discipline to some extent, but in practice I've never seen it become an issue (although arguably that could be because not many people know that you can access double-underscore variables from outside a class).

Where did you get that idea from? Double underscores have to do with name mangling, nothing else https://stackoverflow.com/questions/1301346/what-is-the-mean...

It's just how I've always seen it used.

IME if you allow someone to use your private API, you can be sure they will at some point. And then they will be unhappy when you change it. Sure, you can go with "I said it was private, your problem", but that is the road to zero users.

So much this. I've seen it happen time and time again.

At least in Java, the reflection API serves as a "break glass" barrier to make sure folks understand they're doing something they shouldn't. Someone will still do it, and they'll still blame you when their app breaks later... but I like to believe it at least scares some folks away.

Honestly, I don't see people getting unhappy at breaking private API functionality. Most grab a library when they need it and never upgrade it. They only do so when there's a serious security risk or some whiz-bang new feature they need.

The common example: programmer grabs a library to solve a problem; a few weeks later they realize it doesn't really solve the problem and requires modifications to the library to do some custom job; they make the modifications. That's normally the end of the story for several years. No point in upgrading the library unless it really solves a business problem.

That constitutes the vast majority of developers.

Of course there are other organizations that always want the latest and greatest version and are constantly upgrading. To them, I'd say sure, stick to the public methods, otherwise you're creating a big headache down the road. But I wouldn't require it of them.

Closed source dependencies treat programmers as infants. With open source dependencies, private methods are just an organization tool, not an actual limitation on the programmer.

If a programmer cannot access some variable or function because another developer thought that they "shouldn't", how is that not treating them like infants?

I think the API is the GPGME library?


... which internally calls gnupg using the --with-colons argument, which is how you're supposed to get machine-readable output.

That said, maybe your use case isn't very accessible via the standard gpg commands. It sounds like you want to parse the ciphertext and extract the recipient keyids? You can do that with gpg --list-packets, but this is really intended for debugging, not automated consumption. It doesn't look too hard to parse, but who knows how stable it will be in the future.

(for example)

    gpg --list-only --list-packets foo | gawk '/^:pubkey\>/ { print $9 }'
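The same extraction can be sketched in Python (the sample output below is illustrative; as noted above, the --list-packets format is a debugging aid, not a stable interface):

```python
import re

def recipient_keyids(list_packets_output: str) -> list:
    """Pull keyids from ':pubkey enc packet:' lines of gpg --list-packets.

    NOTE: the line format here is illustrative and may change between
    gpg versions -- this output is not meant for automated consumption.
    """
    return re.findall(r"^:pubkey enc packet:.*\bkeyid ([0-9A-F]+)",
                      list_packets_output, re.M)

# Illustrative output of: gpg --list-only --list-packets foo
sample = (
    ":pubkey enc packet: version 3, algo 1, keyid 1234567890ABCDEF\n"
    ":pubkey enc packet: version 3, algo 18, keyid FEDCBA0987654321\n"
    ":encrypted data packet:\n"
)
print(recipient_keyids(sample))  # ['1234567890ABCDEF', 'FEDCBA0987654321']
```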

I have a command-line wrapper around gpg for similar reasons - https://dotat.at/prog/regpg/

The magic you are missing is `--quiet --status-fd 1`, then look for ENC_TO lines - the documentation can be found in https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;...

Based on my testing this works with gnupg 2.1 and 2.2, so I think it should help with your problem.

Yes, more people should know about the --status-fd option. I only wish there were a version that gave JSON output or something a little more foolproof to parse; the record syntax is easy to analyze with sed/awk/etc, but you have to look up the docs for the meaning of each field, and I wonder about the quoting sometimes.
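As a sketch of the --status-fd approach described above (the ENC_TO field layout is documented in GnuPG's doc/DETAILS; the sample output here is illustrative):

```python
import re

def enc_to_keyids(status_output: str) -> list:
    """Extract long key IDs from ENC_TO lines of gpg --status-fd output.

    Per GnuPG's doc/DETAILS, ENC_TO lines have the form:
        [GNUPG:] ENC_TO <long_keyid> <pubkey_algo> <keylength>
    """
    return re.findall(r"^\[GNUPG:\] ENC_TO ([0-9A-F]+)", status_output, re.M)

# Illustrative status output, as produced by something like:
#   gpg --quiet --list-only --status-fd 1 --decrypt ops.gpg
sample = (
    "[GNUPG:] ENC_TO 1234567890ABCDEF 1 0\n"
    "[GNUPG:] ENC_TO FEDCBA0987654321 18 0\n"
)
print(enc_to_keyids(sample))  # ['1234567890ABCDEF', 'FEDCBA0987654321']
```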

Oh god this. GPG's commandline is so inconsistent and frustrating to use. Plus it has all of these concepts that hardly anybody cares about in real life. Like the "trust level" of a key in the system. There are 5 (maybe 6) trust levels and it would take someone really paranoid to use most of them. Of course imported keys default to the lowest (most useless) level so you have to change it, which is a pain in the ass to do from the commandline (you have to write an expect script or do a wholesale import of a "trust database").

And then there was Ubuntu 16 that couldn't seem to import private keys at all or must have required some kind of super secret commandline option to allow it.

Honestly at this point I've been waiting for the inevitable article about how GPG has been maintained by one starving guy in his basement for the past 20 years and it turns out it's a total mess and nobody noticed because nobody was looking. Basically OpenSSL all over again.

> Honestly at this point I've been waiting for the inevitable article about how GPG has been maintained by one starving guy in his basement for the past 20 years ...

Perhaps you missed it in 2015:


The current funding page is at https://gnupg.org/donate/

New GnuPG can use TOFU trust model that simplifies this flow (an example here: https://www.kernel.org/category/signatures.html#using-the-we... ).

> Of course imported keys default to the lowest (most useless) level so you have to change it

You don't need to adjust trust levels, just sign the key locally (lsign) and it'll be valid. (there is a difference between key validity and trust, check out this excellent resource https://www.linux.com/learn/pgp-web-trust-core-concepts-behi... ).

That article makes the Web of Trust seem like an even bigger mistake. Once you have more than a handful of keys in the system, the interactions become complex, too complex for good security IMHO.

Does it?

For a key to be considered valid, it must be either ultimately trusted, or signed by an ultimately trusted key, one fully trusted valid key, or three marginally trusted valid keys.

It seems to me there are only two properties to track and quite an easy calculation to do (fizz-buzz level).
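That calculation can be sketched as follows (a toy model of my own, assuming one-hop certification, no key expiry, and no signature cycles):

```python
ULTIMATE, FULL, MARGINAL = "ultimate", "full", "marginal"

def is_valid(key, trust, signatures):
    """Classic GnuPG-style validity: a key is valid if it is ultimately
    trusted, or signed by one fully trusted valid key, or by three
    marginally trusted valid keys.

    trust:      key -> trust level (absent means untrusted)
    signatures: key -> set of keys that signed it
    """
    if trust.get(key) == ULTIMATE:
        return True
    valid_signers = [s for s in signatures.get(key, set())
                     if is_valid(s, trust, signatures)]
    full = sum(1 for s in valid_signers if trust.get(s) in (ULTIMATE, FULL))
    marginal = sum(1 for s in valid_signers if trust.get(s) == MARGINAL)
    return full >= 1 or marginal >= 3

trust = {"me": ULTIMATE, "bob": FULL,
         "c1": MARGINAL, "c2": MARGINAL, "c3": MARGINAL}
signatures = {
    "bob": {"me"}, "c1": {"me"}, "c2": {"me"}, "c3": {"me"},
    "dave": {"bob"},               # one fully trusted valid signer
    "eve": {"c1", "c2"},           # only two marginals -- not enough
    "frank": {"c1", "c2", "c3"},   # three marginals -- valid
}
print(is_valid("dave", trust, signatures))   # True
print(is_valid("eve", trust, signatures))    # False
print(is_valid("frank", trust, signatures))  # True
```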

That's already three conditions, and those spider up to the conditions above.

It's like ACLs. On the surface they seem so easy, but in practice they're a nightmare to manage because you have to track so many dependencies to figure out a simple yes/no question.

There is one assigned property (trust), one relation table (signing key, signed key), one computed property (validity, the output) and one condition.

I'd be very happy to see a simpler solution to the problem of decentralized authenticity.

Some projects do rely on Web of Trust, for example Linux kernel (https://www.kernel.org/signature.html#kernel-org-web-of-trus...) or Arch Linux (https://www.archlinux.org/master-keys/).

Looking at the `sq` command line client doesn't instill hope in me that it is going to be friendly: https://docs.sequoia-pgp.org/sq/index.html But if it's at least going to be consistent, that would already be a big win.

What would make a friendly interface in your opinion?

The sq frontend uses git style subcommands to clearly separate actions from options. This is something that gpg doesn't do too well. For instance, commands (e.g., -e) look like options (e.g., -r). If no command is given, gpg tries to guess what you meant, which is perhaps good for users, but bad for programmers. And if an option isn't relevant to a command, it is often just ignored, which again, is perhaps reasonable for users, but bad for programmers.

I have nothing against git-style subcommands; in fact I think they are great. A friendly interface would in my opinion be designed from the perspective of an end user. In the case of GPG, the end user has a couple of goals in mind: to encrypt, decrypt or sign data. Although you can do a lot more, for most users those would be secondary goals. I guess the average user doesn't necessarily know about autocrypt, ASCII armor, OpenPGP packets, etc. Those users would have to guess whether they need them (do I need autocrypt? Why isn't it a default?). To be honest, I don't think the usage output is very bad in its current form, but as a start for something that will evolve over the years I am not so sure.

Now it needs a --output=json option to make it awesome and usable from scripts and programs.

We've been considering this, but we'd rather have people use the library. We already have the start of a Python interface, which is pretty easy to use, IMHO.

But, I suspect that some people will insist on a shell script. So, we'll probably go this route sooner rather than later to avoid developers trying to parse the output of sq in an ad-hoc manner.

I'm sure you've already considered all the possible solutions, but why don't you run gpg in a subprocess as a different user, so it can list "Your" key (unless by "you" you mean "the user that originally encrypted the file")...?

EnvKey[1] might be interesting to you (disclaimer - I'm the founder). It's similar in principle to the homegrown system you've created, but has had a lot of time put into smoothing out the ux. It gives you a single place to manage configuration/secrets for all your projects.

It uses OpenPGP.js (maintained by ProtonMail) and golang's crypto/x/openpgp instead of gpg.

1 - https://www.envkey.com

Have you looked at Keybase? It offers a wrapper around the GPG commands. I know it can be used to encrypt/decrypt and sign stuff, but I haven't looked at it in detail so I don't know if it will suffice, but it might. `keybase help pgp` will list the commands. I'm also not sure if it's usable if you don't have a Keybase account (though those are free).

Are you adding your own identity to the list of recipients?

This works for gpg 2.1.18:

    gpg -u ${MYHEX} --batch --passphrase-file /path/to/some/hard-coded-passphrase --compress-level 1 --cipher-algo AES256 --sign --encrypt -r ${MYHEX} -r ${RECP1} -r ${RECP2} -r ${RECPn} -o ${OUTPUTFILE}.gpg ${INPUTFILE}

Or, you know, you could contribute JSON output to GPG. I mean, open source and stuff.

We at keybase developed a PGP replacement called Saltpack. It uses only modern crypto, with all the features you would expect like authenticated encryption and branch-free secret key operations (via the NaCl library). We have a better armoring format that won’t get mangled in modern markdown contexts. It is also integrated with our CLI and in the upcoming release we support encrypting for teams.


The problem with keybase is that it has grown from a simple command-line tool (which I could perhaps trust) into a monster with a GUI, a menu-bar icon, and a resident daemon that does lots of things, and until recently failed to even start properly for me. The daemon must be running even to use the CLI.

I'm glad you are making a PGP replacement, but I'm worried that this will head in the same direction. I want simpler, auditable and understandable systems for crypto, not complex beasts with hidden interactions.

I fully share that opinion. I loved Keybase for its simplicity: It was a way to verify keys based on social media accounts and thus a more practical alternative to the web of trust. The client was optional, I could track someone by using bash and curl, and I kind of managed to understand how it works.

Nowadays unfortunately it's a complex system of key management, account authentication, chat, file encryption, and more. I have no way to fully understand that system.

Simplicity is an under-appreciated feature.

Oh yes! Keybase went full-Skype.

I loved the concept of verifiable identity and that I could use it even with "plain" gpg commands without having to trust their client.

Unsurprisingly, there was no way to monetize this, so they've had to pivot to being SlackDropboxIDontKnowWhat.app

Too bad as verifiable identities can already be implemented in pure OpenPGP, see Linked Identities (https://tools.ietf.org/html/draft-vb-openpgp-linked-ids-01) as implemented in e.g. OpenKeychain.

The web site design is so hipsterish :)

I kept looking for "install" or "download".

It's in "repos".

I really need to start using keybase and keybase features. Thus far I haven't, probably due to lack of need. However, I heavily appreciate the work you all do. Of the biggest problems facing humanity in my eyes, several are related to information. Focusing on making information secure is a big boon to those problems.

With that said, after a quick navigation around the site, I don't see any way to compensate keybase for your work. This always makes me nervous, as I hate SV Startup culture and immediately have concerns about who is funding keybase, and how they will put dinner on the table tomorrow; or what compromises keybase might make if dinner is problematic.

I would love a way to contribute to a revenue stream. I love products that are paid for. So please, develop a normal revenue stream. Less startup, more business, maybe with a flavor of non-profit if you really want to tickle my fancy.

Thank you all for the work, and please help us help you :)

This uses the Nettle crypto library under the hood. Does anyone know how well viewed (or reviewed) it is? I'm curious to hear more from the devs on their reasoning for picking it (maybe it was the only one with the features needed and a suitable license?)


Nettle implements most of the crypto needed for OpenPGP; its small size and few dependencies make it easy to wrap for Rust. I did review some, but not all, of Nettle and it looks pretty solid[1]. GnuTLS uses Nettle, so you can expect there are people smarter than me trying to break it. Initially, I wanted to use Botan, but the fact that it's in C++ means you need to write two wrappers: one from C++ to C and one from C to Rust.

1: I checked for proper RSA base blinding, a secure CPRNG, lack of Bleichenbacher oracles and lack of invalid-curve attack vectors. It uses GMP for bignum stuff, so carry propagation bugs are unlikely. There are some things that aren't super nice: the CPRNG does not reseed on fork, the included AES doesn't look particularly constant-time, and the library neither uses mlock() nor zeroes secrets after use.

Alas, I just started working on a Rust wrapper for Botan https://crates.io/crates/botan

The existing Botan C API is in fact sufficient for OpenPGP already, https://github.com/riboseinc/rnp is in C++ now but was originally C and uses Botan's C API.

But Nettle is IMO quite solid and the developer is very skilled, so full steam ahead.

Is there any relation between Sequoia and the BoringPGP spec?

Yes, I'm a co-author of BoringPGP[1]. Sequoia itself will probably support it as soon as Marcus and I get around to finishing the spec, but it's not part of Sequoia and I only work on it in my free time.

1: https://github.com/boring-pgp/spec

Does it go beyond the minimum and old set of ciphers in use by the usual implementations? I'm asking because I firmly believe that any GnuPG re-implementation should try to improve on the current state of mail crypto [0].

[0] https://blog.cryptographyengineering.com/2014/08/13/whats-ma...

It's interesting that blog post didn't mention one of the largest concerns people should have with PGP -- it still doesn't use authenticated encryption even though the problems with unauthenticated encryption have been known for decades.

(Yes, GPG has the MDC -- but as the fairly recent "PGP/MIME is broken" bugs showed, the handling of MDC was broken in several ways and it can only be detected after outputting all of the data. AEAD is far more cryptographically sound.)

Nitpick: MDC failures can only be detected after decrypting all the data. So the decrypting program would need two passes (one to validate and one to present the plaintext), or would have to buffer the decrypted output until it has verified the MDC is sound. GPG, however, does neither. AEAD, as far as I understand, suffers from a similar, though less severe, problem: decryption can fail at any point due to a validation error. You can be certain that anything up to the failure hasn't been tampered with, but the question of what to do with the partial plaintext remains. A partial, truncated ISO or script may be problematic, even if it hasn't been tampered with (think: truncating "rm -rf /tmp/foo" to "rm -rf /").

> GPG, however, does neither.

It also will output to --output even if an MDC validation error occurs at any point, which is pretty insane. But from memory this is actually a larger architectural problem (GPG doesn't appear to buffer anything -- which means that when you tell it to write to --output it doesn't write anywhere else first).

> AEAD, as far as I understand, suffers from a similar, though less severe problem: decryption can fail at any point due to a validation error. You can be certain that anything up to the failure hasn’t been tampered with, but the question of what to do with the partial plaintext remains.

That is also a problem, though it would avoid most attacks where programs don't check GPG error codes (which is what the email vulnerabilities a few months ago were about). The only practical attack is what you've described -- some sort of known-plaintext attack where the message already contains the attack payload as a prefix. Ultimately AEAD (over MDC) is a protection against people accidentally doing the wrong thing when using crypto tools -- because users should not trust any of the decrypted text if the message didn't validate.
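A toy illustration of the discipline being argued for (stdlib only, and deliberately not real encryption -- the point is purely the order of operations): release plaintext only after the whole message authenticates, rather than streaming it out and checking a trailing MDC afterwards.

```python
import hashlib
import hmac
import os

TAG_LEN = 32  # SHA-256 output size

def seal(key: bytes, plaintext: bytes) -> bytes:
    # Append a MAC over the whole message. This is a toy stand-in for
    # AEAD: there is no actual encryption here, which is why this is
    # only a sketch of the verify-before-release pattern.
    tag = hmac.new(key, plaintext, hashlib.sha256).digest()
    return plaintext + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    body, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    # Verify BEFORE releasing any plaintext -- unlike gpg's MDC
    # handling, which streams the plaintext out and only reports the
    # validation failure at the very end.
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed; no plaintext released")
    return body

key = os.urandom(32)
blob = seal(key, b"rm -rf /tmp/foo")
print(open_sealed(key, blob))  # b'rm -rf /tmp/foo'
try:
    open_sealed(key, blob[:-5])  # message truncated in transit
except ValueError as e:
    print(e)  # authentication failed; no plaintext released
```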

Let's go through Matt's major points:

    Key Exchange / Key Management
This isn't really a problem with the OpenPGP protocol or an OpenPGP implementation. This is inherent to any system that tries to protect you from active adversaries. If you are willing to use centralization, then you can do something like X.509 (which is what TLS uses), but there are many, many cases of CAs issuing bad certificates either by accident or maliciously, e.g., the TURKTRUST incident. You can also do something like Signal with its verified key servers. But, if you want to be decentralized, then somehow you have to get the user involved. So, in my opinion, this is more a criticism of decentralization than of OpenPGP.

Now, that doesn't mean that OpenPGP tooling can't help. In fact, about 4 years ago, several initiatives began working on mechanisms to make key discovery much easier, and mostly transparent for users primarily concerned about privacy (as opposed to those whose threat model includes active adversaries, like activists, lawyers, or journalists). See, in particular, the work that pep (https://pep.foundation) and Autocrypt (https://autocrypt.org) have been doing.

    Forward Secrecy
As I've written before (https://arstechnica.com/information-technology/2016/12/signa...), I don't think that forward secrecy is actually fixing a problem that most people have. This is particularly true for OpenPGP, where most people are interested in encryption of data at rest (messages stored on an IMAP server), which forward secrecy doesn't help with (forward secrecy, because it throws away old key material, only makes sense for protecting data in motion).

But, that doesn't mean that we haven't given some thought to the problem. In fact, at the very same gathering, Justus, who is also working on Sequoia, presented a proposal for adding forward secrecy to OpenPGP in a backwards compatible manner. You can watch the presentation (https://www.youtube.com/watch?v=an6oYjikAPY), or read an early version of the proposal (https://mailarchive.ietf.org/arch/msg/openpgp/mk8_FSS-n4DVGf...).

The short version is: OpenPGP already has mechanisms to mark encryption keys as being appropriate for data at rest or data in motion. Until now, no implementation has bothered with this distinction. We propose creating two encryption-capable subkeys, one for data at rest, and one for data in motion, and rotating the one for data in motion once a week. To ensure that a sender has a non-expired encryption key we pre-generate keys, and distribute them via the keyserver network.
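The rotation scheme described above can be sketched in a few lines. This is just an illustration of the bookkeeping, not Sequoia's or the proposal's actual code; the weekly period and the pre-generation window of four subkeys are illustrative values:

```python
from datetime import datetime, timedelta, timezone

# Proposal: rotate the data-in-motion subkey weekly (illustrative value).
ROTATION_PERIOD = timedelta(weeks=1)

def current_motion_subkey(subkeys, now):
    """Return the (created, expires) pair of the pre-generated
    data-in-motion subkey that is valid at `now`, or None."""
    for created, expires in subkeys:
        if created <= now < expires:
            return (created, expires)
    return None

# Pre-generate four consecutive weekly subkeys starting at an epoch and
# publish them via the keyservers, so senders always find a non-expired
# encryption subkey:
epoch = datetime(2018, 8, 1, tzinfo=timezone.utc)
subkeys = [(epoch + i * ROTATION_PERIOD, epoch + (i + 1) * ROTATION_PERIOD)
           for i in range(4)]

# A message sent ten days after the epoch falls in the second week:
key = current_motion_subkey(subkeys, epoch + timedelta(days=10))
```

Once a data-in-motion subkey expires, its secret material can be destroyed, which is what actually buys the forward secrecy; the data-at-rest subkey is kept indefinitely.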

    The OpenPGP format and defaults suck
It is true that OpenPGP has standardized a number of ciphers that are no longer sensible, includes compression support, etc. But, OpenPGP is over 25 years old. In that time there have been many improvements. But Matt is right that these improvements come slowly. This is partly due to the lack of funding: the industry chose S/MIME over OpenPGP. (Although S/MIME is cryptographically worse than OpenPGP. See EFAIL for a critical example of why.)

A major difficulty in deprecating old ciphers is that OpenPGP is used for data at rest. And people rightly expect, I think, to be able to decrypt data and verify signatures from X years ago. This means we can't completely drop support for, say, CAST5: people wouldn't be able to decrypt old messages. Matt seems to ignore this bit and focuses primarily on real-time communication (e.g., Signal), which only needs encryption for data in motion, i.e., the encryption is stripped and the plaintext archived only on a trusted device (e.g., not an IMAP server).

One thing that we are considering in Sequoia is requiring the caller to provide a timestamp when verifying or decrypting a message. The timestamp can be used to choose defaults that are appropriate for when the message was allegedly created, and it can be double-checked against the timestamp in the signature. In this way, if someone tries to send you an email using a deprecated cipher, they'll also have to set the timestamp in the email to, say, 1997, which would hopefully be suspicious. Likewise, a warning can be shown when the message doesn't meet the current standard.
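A minimal sketch of such a timestamp-aware policy check, assuming nothing about Sequoia's actual API: the cipher names are real OpenPGP algorithms, but the sunset dates and the `acceptable` helper are made up for illustration:

```python
from datetime import datetime, timezone

# Hypothetical dates after which a cipher should no longer appear in
# newly created messages (illustrative values, not a real policy):
CIPHER_SUNSET = {
    "IDEA": datetime(2000, 1, 1, tzinfo=timezone.utc),
    "CAST5": datetime(2014, 1, 1, tzinfo=timezone.utc),
    "AES256": None,  # still considered current
}

def acceptable(cipher, alleged_creation_time):
    """Accept a cipher only if it was still current when the message
    claims to have been created.  Decrypting old archives keeps working,
    but a new message using CAST5 must also claim an old timestamp,
    which is hopefully suspicious."""
    sunset = CIPHER_SUNSET.get(cipher)
    return sunset is None or alleged_creation_time < sunset

# A CAST5 message claiming to be from 1997 passes; one claiming to be
# from 2018 is flagged:
old_ok = acceptable("CAST5", datetime(1997, 6, 1, tzinfo=timezone.utc))
new_ok = acceptable("CAST5", datetime(2018, 8, 1, tzinfo=timezone.utc))
```

The point of the design is that the downgrade and the timestamp have to lie consistently, which gives the implementation something concrete to warn about.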

I hope that helps! If you have any other questions, you're welcome to ask here, or on irc (#sequoia on freenode) or on our mailing list (devel@sequoia-pgp.org).

:) Neal

Maybe also interesting: https://neopg.io/

It's (also) from a former GPG dev, based on the GPG source. Some of its goals: switch to C++, allowing reuse of the legacy code (with lots of code thrown out), and move back to a single binary.

Now, C++ is naturally not your fancy new language, and the rewrite-it-in-Rust people may reach for their pitchforks, but he has a blog post where he argues for his choice: https://neopg.io/blog/cplusplus/

TLDR: He can build upon the well-established GPG; C++ is mature, and its flaws are known and can be avoided or worked around.

He even mentions the Sequoia Project in the last paragraph, and envies them a bit as they can use Rust.

(for the record, nothing against rust or sequoia, just wanted to show a related project)

While C++17 is a pleasure to work with compared to C++98, this kind of thinking only works out in very small teams (<= 5) that are heavily invested in quality-assurance tooling like analyzers.

Making developers avoid C style coding on C++ code is a continuous fight.

While at the Delta X gathering last week, we recorded an introduction to Sequoia. The presentation covers our motivation for starting the project, an overview of Sequoia’s architecture, and the project’s status: https://www.youtube.com/watch?v=NBbtIZipeNI

The tl;dr is that we’re actually pretty far: Sequoia is already being tested with the p≡p engine, and other projects, like Delta Chat, have begun replacing GnuPG or NetPGP with Sequoia.

What about opmsg as a GPG replacement?

> opmsg is a replacement for gpg which can encrypt/sign/verify your mails or create/verify detached signatures of local files. Even though the opmsg output looks similar, the concept is entirely different.


Your messages have perfect forward secrecy except when they don't, because it uses DH but falls back to RSA when it runs out of keys. That's a security failure waiting to happen; IMO no forward secrecy is a more user-friendly model than "perfect forward secrecy except sometimes not".

Support for multiple recipients is inherently a mess in that kind of model.

No effort is made to even try to solve the identity problem

PFS aside it doesn't seem to do anything you couldn't do with GPG - e.g. there's nothing to stop you verifying GPG key fingerprints by hand or using different keys to communicate with different recipients. In particular it doesn't seem to offer any improvement on the big problem that this is about - good MUA integration.

Ultimately that looks like a good project that takes those techniques as far as they can go, but also shows exactly why those techniques never made sense for an email-like environment.

A bit of a tangent, but is something like BIP39 (seed words that can be used for backup for cryptocurrency wallets) possible for gpg? It would make backups 100x simpler.

That name is unfortunate because most of the world has no clue how to pronounce or spell it.

Huh? Is the pronunciation given at https://en.wiktionary.org/wiki/sequoia#English incorrect? It's exactly what I'd expect of a word spelled "sequoia", and I'm frequently baffled by the inconsistencies in pronunciation that plague English otherwise ("ough").

Although vowel centering makes it difficult to distinguish the pronunciation from that of a hypothetical "siquoye" or similar, a quick look at the list of translations indicates that most languages use equivalent vowels, unless they have an etymologically completely unrelated word.

As far as names go, that one seems to be of the easier kind.

I've always pronounced it "sə-koy-ə" as opposed to "sə-kwoy-ə".

That might be a Britishism; Wiktionary does call that out as the UK pronunciation. I also only know it as "səˈkɔɪ.ə".

How is that different from any other name? Actually, Sequoia is an English noun, and as such I expect it to be more familiar than many other names. Even better, the same word exists in many other Latin-script languages. I know that there are many more languages in the world, but none of them is as widespread as English.

> I know that there are many more languages in the world, but none of them is as widespread as English.

You will be surprised.



If you count by L1 speakers, English is only third. If you count by number of people with working proficiency of the language, English has somewhere between 1 and 2 billion speakers, which is probably more than the number of Mandarin speakers. Second-language stats are really sketchy, but English is unique in being among the top native languages while having many more non-native speakers than native speakers.

As you can see, I respect you more: I spent some of my time providing links for you as proof. But never mind, it's off-topic anyway. And it's not so important for me to argue about :)

L1 speakers are relatively easy to gauge, since national censuses provide sufficiently accurate statistics. Accounting for L2 speakers is very difficult, since there are often little to no census-level statistics on the matter, and the definition of proficiency used for estimates is difficult to ascertain. Order-of-magnitude estimates are roughly the best you can do.

On that matter, there are some languages which are clearly much more common (over twice as common) as second languages than as native languages. English, French, Malay, and Swahili are obvious candidates: English is a common secondary language, well, everywhere; French is a common second language in Africa; Swahili in East Africa; and Malay in Malaysia/Indonesia.

And still erkl did it and you were too lazy.


Sequoia is a Latin word. The pronunciation in English, Spanish, and Portuguese (and probably others) is almost identical.

