An update on GnuPG (lwn.net)
158 points by jlgaddis on Oct 11, 2017 | 46 comments



I agree with the statement in the article that the current Web-of-Trust model is broken. I think that is one thing that the folks at keybase (keybase.io) understand, and I like their model. I like it enough, in fact, to actually use it, something that I cannot say about GPG despite having tried it on numerous occasions over the years.

GPG is proposing moving to a TOFU model (trust on first use, much like ssh works). I'll be curious to see whether that takes off; it seems like a step in the right direction.
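The TOFU idea is simple enough to sketch in a few lines (a toy illustration only, not gpg's actual implementation, which keeps its pins in a local database):

```python
# Toy TOFU (trust-on-first-use) pin store, like ssh's known_hosts:
# the first key seen for an identity is pinned; a later mismatch is an alarm.

pins = {}  # identity -> key fingerprint

def check_tofu(identity, fingerprint):
    """Return 'new' on first use, 'ok' on a match, 'MISMATCH' on conflict."""
    if identity not in pins:
        pins[identity] = fingerprint  # trust on first use
        return "new"
    return "ok" if pins[identity] == fingerprint else "MISMATCH"
```

The point is that no third party vouches for anything: you just notice when a key changes, and decide for yourself what that means.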

I think 'trust' as a concept is difficult to codify into a protocol. What alternatives are there that would be better than what keybase.io does, or what GPG is proposing?


Last time I tried to use keybase it required me to paste my private key into a browser before using any of the advanced features like chat. This seems unnecessary and doesn't make any sense from a security perspective. Has the situation changed? I won't use their service until this is fixed.


If I recall correctly, you can choose to let keybase store your password-protected private key for the purposes of decrypting messages through the website, but that's not required, and the advanced features (e.g. chat) don't work without a local install. Everything that can be delegated to the app (GUI or command line) generally is. The keybase team seems to take this quite seriously, and they've had documentation on how to use the platform without giving their servers any information since at least when I joined in early 2014.

Give it a shot, it's quite painless as far as crypto products go. You can always choose not to use it if you decide it's storing too much information. Happy to provide an invite if you (or anyone else) needs one.


I'm curious about your statement that you're using keybase, but not GPG. If you're only interacting with others that are using keybase, I assume that's possible, but if you're interacting with others, you're going to have to use GPG, right? Keybase can handle public keys, but your private key is yours. Or am I missing something?


Keybase has increasingly moved to its own crypto model that isn't GPG-backed. Even where "traditional" PKI RSA and ECC curves are used, it no longer uses the GPG tools, relying on other open-source implementations instead.


> Support for 4096-bit RSA keys has been in GnuPG for some time, but Koch contends that real security will require 16Kb keys [...]

Could someone corroborate or cite sources? I thought 2048-bit is still considered secure, and 4096-bit slightly overkill.


> Could someone corroborate or cite sources?

There aren't any. I was also quite surprised; I'm not sure exactly what Werner said, but I doubt he wanted to seriously question RSA-2048.

Also, the various key-length guides people like to cite, where someone says "key length x bits is secure exactly until the year 2030", are usually little more than guesswork and not based on scientific sources.

The state of RSA security is actually relatively clear: 2048 bit is secure against all existing classical attacks. It would require massive and unlikely mathematical breakthroughs to change that. If you add in quantum computers then RSA is pretty much broken with all normal key sizes (but ECC is as well). There's a difference between 2048 and 4096, but it probably doesn't matter that much in practice.


yes, is there any reliable place to look for the current "best" practices for key generation? (as in, good enough for the foreseeable future)


Similar, though slightly off-topic (SSH keys rather than GPG), I find this one of the best resources to look at:

https://wiki.mozilla.org/Security/Guidelines/OpenSSH


https://www.keylength.com/ presents the recommendations from various best-practice documents for many key types and cryptographic operations.


If I'm using the comparison tool correctly, then it shows that 2048-bit keys would be fine up to 2020 ~ 2030, and 4096-bit keys beyond ~2022. This is in line with my understanding.

FWIW, ssh-keygen(1) generates 2048-bit RSA keys by default[0]. Generally if I had to appeal to authority over security matters, I refer to the OpenBSD / OpenSSH guys.

[0] https://man.openbsd.org/ssh-keygen#b


SSH Keys are for login authentication. If tomorrow we crack RSA-2047, you can replace them before we get to 2048 bits.

GPG Keys are for secrecy. If the adversary gets a ciphertext, he can hold it until 2030 and decrypt then. Very different model!


Hmm, good point.


I am not convinced they are very different models.

My argument: An attacker can save the encrypted SSH traffic going over an insecure channel, then hold the traffic until 2030 and decrypt then.

Do you find a flaw in my argument?


Yes. It is, forgive me for being blunt, nearly completely wrong.

For starters, there's two different kinds of keys in use: symmetric and asymmetric. The ratio of "bits" to "strength" is completely different for the two categories.

Asymmetric keys are typically only used to handle identity, then bootstrap a selection of symmetric keys (which are faster to use, generally); and then the symmetric keys used are typically based on Diffie Hellman key exchange, which is a whole 'nother ball of wax.

Which bits you have to hold onto, and what you get when you compromise them, is not a flat field. Compromising the asymmetric keys used for identity at the beginning of communication means you can forge that identity in the future; but it doesn't necessarily mean you get to compromise other communications made with yet other keys which were merely agreed upon under the situational aegis of those asymmetric keys at a previous date.


Google (Perfect) Forward Secrecy

There's a mathematical trick (these days actually several different ones, but the original idea is the same) which lets two people agree on a number without either of them saying what the number is, and without any witnesses learning it by watching the conversation. If you struggle with high-school mathematics, look for videos which demonstrate this trick using paint mixing, so you're not distracted by mathematics you don't understand.

This trick is routinely used in realtime communications (HTTPS, SSH, secure messaging protocols) to make a throw away transient key [actually usually two and sometimes four] used only for one conversation. Once the communication is over, both parties throw away their transient keys, and even somebody who knows the long term keys can never find out what they were because of Forward Secrecy.
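The trick (Diffie-Hellman key agreement) can be sketched with deliberately tiny numbers; real deployments use ~2048-bit groups or elliptic curves, and these values are illustrative only:

```python
# Toy Diffie-Hellman over a small prime field.
p, g = 23, 5                       # public: prime modulus and generator

a = 6                              # Alice's secret (never transmitted)
b = 15                             # Bob's secret (never transmitted)

A = pow(g, a, p)                   # Alice sends A over the wire
B = pow(g, b, p)                   # Bob sends B over the wire

# Each side combines its own secret with the other's public value:
shared_alice = pow(B, a, p)
shared_bob   = pow(A, b, p)
assert shared_alice == shared_bob  # same number, never sent on the wire
```

A witness sees only p, g, A and B; recovering the shared number from those requires solving the discrete logarithm problem. Once both sides delete their secrets, even they can't reconstruct it.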


In addition / supplement to what @heavenlyhash says:

GnuPG cannot do a Diffie-Hellman key exchange to set up the session/symmetric key: one party unilaterally generates a random symmetric key (e.g. an AES key), encrypts that session key with the recipient's public (asymmetric, e.g. RSA) key, encrypts the message with the symmetric key, and sends both ciphertexts as the message. A passive attacker can store all of this and only needs to crack, or gain access to, the recipient's private key to recover the symmetric key, and then the message.

Active/online protocols can use a key exchange: today that typically involves ephemeral (elliptic curve) diffie-hellman. The exchange is authenticated with the asymmetric key - but the session keys are independent of these "permanent" keys.

So the RSA (or similar) key is used to make sure you're playing "guess the number I'm thinking of" (the random symmetric session key) with the intended recipient and not an active attacker - but once the session key is thrown away, there's no way for either participant, or a passive attacker, to recover it.
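The GPG-style hybrid model above can be sketched with textbook RSA on tiny numbers and XOR standing in for the symmetric cipher (illustrative only; nothing here is secure):

```python
# Toy sketch of the OpenPGP hybrid model: a random session key wrapped
# with the recipient's RSA key, plus the symmetrically encrypted body.
import secrets

n, e, d = 3233, 17, 2753         # toy RSA key pair (p=61, q=53)

session_key = secrets.randbelow(256)             # random symmetric key
wrapped = pow(session_key, e, n)                 # encrypted to recipient's RSA key
body = bytes(c ^ session_key for c in b"hello")  # "AES" stand-in: XOR

# The recorded message is (wrapped, body). A passive attacker who later
# obtains (or breaks) the recipient's RSA private key recovers everything:
recovered_key = pow(wrapped, d, n)
plaintext = bytes(c ^ recovered_key for c in body)
assert plaintext == b"hello"
```

This is exactly why the "store now, decrypt in 2030" threat applies to GPG messages but not to a forward-secret session.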

See also:

https://lwn.net/Articles/572926/


With encrypted SSH traffic, there are normally three kinds of keys involved. The traffic is encrypted with per-connection symmetric keys, usually 256 bits. These keys are obtained through a Diffie–Hellman key exchange or similar, which uses per-connection randomly-generated keys. Finally, the long-term "SSH key" is used to sign these randomly-generated keys (and other things).

If you hold the traffic until 2030 and want to decrypt it, you have to break either the symmetric keys, or the key exchange. Breaking the "SSH key" would allow you only to forge a new signature, but that doesn't help decrypting past traffic.

That's the difference between the models: a key used for encryption, once broken, allows you to decrypt. A key used for signing, once broken, allows only forgery. And for SSH, forgery must be done while the forged key is still accepted (known_hosts or authorized_keys).


Secure Secure Shell guide is a good security baseline, including generating long enough moduli and using EC. https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

In the future, recommendations will need to be increased and less secure methods deprecated.


I don't know much about the curator, but nacl is apparently on the same domain:

http://safecurves.cr.yp.to/


Daniel J. Bernstein or djb is a very well known crypto expert with no apparent secret agendas.


Use 4096 for personal email or anything important. Other uses may be fine with 2048. Slightly overkill is better than getting pwned by a sufficiently cheap ASIC.


Quantum computers?


Then 4096 keys are also worthless.


Yeah, but are 16k keys? How does quantum factorization scale?


> Historically, using HSMs has meant smartcards, but Koch noted that even though these implement an open standard they do it with proprietary code, leaving us no current choice but to evaluate the vendors for trustworthiness, and then choose a product. Though this is what Koch himself does now, he'd like more freedom, and is working to bring to market the Gnuk token.

As far as I’ve understood, the NitroKey[0] is fully open and commercially available. Why not use it?

[0]: https://www.nitrokey.com/


Just updated to the newest version, which seems to change quite dramatically how gpg-agent works. I can no longer figure out how to limit a gpg-agent to the current shell (the old version relied on environment variables which were easy to keep local). It bothers me that now, if I do anything with gpg, any other program on the computer can just decrypt information at will. Any solutions for this?


In a world where we measure webpage size in megabytes, why do people consider key size to be so important? I can see TLS caring due to latency, but gpg? I could attach a McEliece quantum 8MB key to every email I ever send, so your "large" RSA keys aren't a big deal.

Edit: oh, smart cards...


Protocols.

You really don't want to sign every TLS packet with an 8MB signature or, in this case, send 8MB of key material over TLS before properly opening the session.

Even if you are in the 1% with gigabit connections, that is going to introduce some decent latency into the system.

It's simply wasteful.


Heh. The way web bloat is going, I can totally see a framework "optimizing" this by showing the user a spinner over HTTP while an ajax request does the first tls handshake, then redirects the user to the https version of the site.


That defeats the purpose of https - the code/response that dictates the redirect can be intercepted and redirect the user to a malicious site.


Also, are operations linear in key size? This source says that for a k-bit key, operations with the private key are cubic in key length, and key generation is O(k^4):

http://x5.net/faqs/crypto/q9.html

but seems to be from the 90s. Maybe faster algorithms are merited now...


They are significantly slower, but for GPG it doesn't matter as long as the time is reasonable: it uses RSA to encrypt a short key, which then encrypts the entire message using something like AES. It would matter for a website served over HTTPS, though.
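You can get a rough feel for the scaling yourself (a sketch; CPython's big-integer multiplication is faster than schoolbook, so the measured ratio lands somewhat below the theoretical 8x for a doubled key size):

```python
# Rough sketch: time one modular exponentiation at two modulus sizes.
# Schoolbook arithmetic is ~cubic in bit length, so doubling the key
# size makes a private-key operation roughly 8x slower.
import secrets, time

def modexp_time(bits):
    m = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # odd, full-size modulus
    base = secrets.randbits(bits) % m
    exp = secrets.randbits(bits)                        # private-exponent-sized
    t0 = time.perf_counter()
    pow(base, exp, m)
    return time.perf_counter() - t0

t2048 = modexp_time(2048)
t4096 = modexp_time(4096)
print(f"2048-bit: {t2048:.4f}s  4096-bit: {t4096:.4f}s  ratio ~{t4096/t2048:.1f}x")
```

Either way both are milliseconds per message on a modern CPU, which is lost in the noise for email.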


> In a world where we measure horribly built webpage size in megabytes

Fixed that for you.


Does it still use a ridiculously low number of iterations of a KDF that is neither time- nor memory-hard for deriving symmetric keys from passphrases?


No, I believe it uses a calibration routine to pick the number of iterations nowadays:

https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;...

So it depends on the hardware it is run on, and has a lower limit as well.
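The idea behind such calibration can be sketched like this (a hypothetical helper, not GnuPG's actual code, which benchmarks its S2K hash rather than PBKDF2):

```python
# Sketch of time-based KDF calibration: raise the iteration count until
# one derivation costs about `target` seconds, with a hard floor
# (GnuPG's floor is 65536 S2K iterations; numbers here are illustrative).
import hashlib, time

def calibrate_iterations(target=0.1, floor=65536):
    iters = floor
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"passphrase", b"salt", iters)
        if time.perf_counter() - t0 >= target:
            return iters
        iters *= 2

# key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, calibrate_iterations())
```

The trade-off: calibrating on fast hardware yields counts that are painful on slow hardware, and vice versa, which is why a fixed floor exists at all.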


But still not an appropriate PBKDF for 2017 - and the lower limit is only 65k iterations.


> RSA, he said, is not likely to stay secure for much longer without really large keys

Anyone got any more info or a source on this?


Apart from confounded's response, we are getting constantly better at factoring numbers. Because of that, the security margin of RSA has been dropping a lot faster than computing power has increased.

In fact, we don't even know whether RSA security is based on something provably hard.


I wasn’t at the talk, but I imagine this may be related to beliefs around quantum computing gradually moving from ‘if’ to ‘when’ in the last few years.


The problem is that when QC come, ECC will not be in any better position than RSA...


Awesome news. Also, instead of plain ssh private keys, I use monkeysphere, which stores them in gpg instead. With git and ssh scripts, it’s easy to make it automatically ask for your passphrase on first use.


I use gpg-agent's ssh support and a yubikey to store my SSH private key. It's a pretty nice experience, being able to switch between computers with my SSH key always with me.
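For anyone wanting to try the same setup, the usual configuration looks roughly like this (file locations assume a typical GnuPG 2.1+ install; check your distro's docs):

```shell
# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# ~/.bashrc (or equivalent): point ssh at gpg-agent's ssh socket
unset SSH_AGENT_PID
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
```

With that in place, `ssh-add -L` should list the key held on the card.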


I'm using it like that as well; it's a really nice experience once you convince all the applications using ssh to use gpg-agent's socket.

Have you enabled any other yubikey features while having all your gpg + ssh keys on it? Like U2F? I'm a bit scared I'll wipe my key ('-_-).


I asked a similar question the other day: https://news.ycombinator.com/item?id=15431299

Basically, it all works fine (I've verified it myself). I've been using a Yubikey for both SSH and challenge/response for quite a while now. A few days ago I started messing with U2F as well. The first that happens is gpg-agent "gets confused" after U2F auth and you have to remove/insert your key and/or re-enter your GPG PIN on next use (cf. linked thread).

Next up for me is figuring out how to disable U2F on my Nanos and use separate U2F-only keys for that (without any conflicts or issues, hopefully).

N.B.: I don't use the OTP functionality at all, currently. I'll probably try out the PIV stuff soon as well and I expect no conflicts or issues with the existing stuff (GPG, C/R, etc.) I have setup.


> The first that happens ...

  s/first/worst/



