Bruce Schneier has changed his PGP key to 4096 bits
222 points by oktypok 1225 days ago | 139 comments
He decided to change his 16-year-old 2048-bit key on the same day he let the world know he was working on the Snowden files. Possible reasons:

- he forgot his password

- he lost his private key

- he knows more than he can tell us

- he really doesn't use his PGP key all that often, had the same one for 16 years on god knows how many computers, and decided that if he's going to generate a new one, he might as well send a message with it.

Or, similarly:

- Snowden et al asked him to create a new keypair with very strict cleanroom practices[0] to be sure as to the sanctity of the private key. It's trivial to bump up the key length when doing so.

[0]: See the August 18 NYT article on Laura Poitras and Glenn Greenwald for a peek at the "operational security" that Snowden demanded. http://www.nytimes.com/2013/08/18/magazine/laura-poitras-sno...

Normally I'd let it go, but I actually would like some clarity on your intent here. Are you implying that Schneier doesn't use encrypted communications on a regular basis, that PGP is impractical, or both?

(and to be clear, my intent is not to bait; I'm actually curious)

Most people don't use PGP on a regular basis. I use PGP a lot, more than I think most HN readers do, but most of the people I talk to (even in my own field, which is full of secrets and adversaries) don't have PGP keys.

I met Bruce this last weekend at a conference, and every single business card I collected had the individual's PGP key on it.

Well, I'm a professional security researcher, and I end up using ZIP+AES more often than I do PGP.

How do you securely share the AES key?

AES is really great compared to RSA, so I put my AES key on my website instead of my RSA public key. It's made it very easy for people to contact me securely.

One of the saddest things about Snowden/Manning/NSA/etc. is that sarcasm and irony are no longer really safe to use in anything even vaguely related. Must really hurt The Onion.

Exactly, the same thing I am wondering. AES is symmetric, so the same key is used for encryption and decryption, unlike RSA, which is asymmetric. So putting the key on a website is effectively the same as not encrypting at all. Did I miss anything here?
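For anyone as confused as the parent: with a symmetric cipher, whoever has the key can decrypt, so publishing it gives everything away. A toy sketch of the symmetry (a hash-based XOR keystream invented purely for illustration, NOT real AES and not secure):

```python
import hashlib

def toy_symmetric(key: bytes, data: bytes) -> bytes:
    """XOR data against a key-derived stream. One key, one function,
    both directions - that's what 'symmetric' means here."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

ct = toy_symmetric(b"secret key", b"hello")   # "encrypt"
pt = toy_symmetric(b"secret key", ct)         # "decrypt": same key, same call
```

Anyone who reads the key off a website can run the exact same two lines. RSA avoids this by splitting the operation between a public key (encrypt) and a private key (decrypt).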

just the humor

the trick with humor outside your main field is detecting it. I was like "isn't AES a symmetric thing?" for a few seconds.

the other trick is naming things: why would such a vague name as "Advanced Encryption Standard" designate specifically a symmetric cipher?

Is it really secure if everyone has the key?

No, it isn't. He/She is trying to be funny.

AES is symmetric...

Yeah, that makes it really convenient for me if I have to decrypt an important message from a public computer.

exactly... I am just as confused as you are. What's the point of using AES if you are putting its key out in the open? O_O That's like using a very sophisticated lock on your front door and putting up a sign saying "the key is under the mat".

I believe the person was joking.

The phone.

Does the reason for that ever come up in a conversation? I thought that everyone working in security used PGP a lot.

In fact, it is unusual to see people even sign emails in the (academic) cryptography community, let alone encrypt messages (at least in my experience). It is surprisingly rare to see academic crypto researchers actually use the systems they design, even for basic things like signing and encryption.

Strategically, you are probably better off not signing a message unless you want the message to be verifiable.

There's also the paranoia of non-repudiability with signed messages. In general, there is minimal benefit in just signing a document. I don't care if someone I work with is spoofed, because it will become obvious very quickly.

It's only in a very few cases where there's an advantage in signing a document, and it's usually more in the verifiability of content (so that you can verify that nothing is lost/changed in transit) than in the verification of identity.

Given the lack of adoption of PGP/GnuPG in email clients vs. S/MIME, if I'm signing my emails without encrypting them, chances are the recipient would still be able to read my emails and, knowing my writing style and given the context, be able to suss out that I was in fact the author.

I use the word "paranoia" intentionally because there's a lack of meaningful legal precedent establishing that a gpg-signed message is enough to establish authorship. In a civil case, sure, it looks bad, but you could easily say, "oops, I stored my private key on [vps or cloud service], which was a well-known victim of a hack."

In a dozen years, I think I've had a grand total of 4 clients (out of several hundred) that ever used PGP (despite recommending it specifically on project kickoff calls, particularly for communicating discovered vulnerabilities). I only had one who already had a key.

Outside of those few work examples, I don't think I've ever sent or received a PGP encrypted message. In fact, the only signed messages I think I've ever seen were mailing list messages from people who signed all their messages.

Since 1997, I've collected public keys from a grand total of 17 friends/colleagues/business-contacts.

(Note: FWIW, the email address in your profile doesn't provide a public key from pgp.mit.edu – I would have added you to my keychain and sent you a "Hi! Isn't it nice to introduce yourself without the NSA listening in!" email…)

I just searched pgp.mit.edu and it looks like the first key I have on there is from 1998[1] (with several others from previous jobs, although I don't know why I had two keys when I worked at BBN).

I don't think I even have a key for my current email address.

[1] http://pgp.mit.edu:11371/pks/lookup?op=vindex&search=0x867C6...

Oh, look. It's the government contractor sowing FUD.

Huh? I think you responded to the wrong comment.

As it's commonly called: "endpoint security".

If his computers have ever been compromised, his PGP private key would likely be as well. Considering all the news about the US government spending millions on rootkits, it was likely a wise choice to generate a new key in case the previous one had been compromised.

I'd guess Schneier would be a target of interest for some state (as is anyone working with encryption, it seems).

Considering the fact that most people at CRYPTO, Usenix Security, IEEE Security and Privacy, and other prominent cryptography and security conferences do not have PGP or S/MIME keys...it would not be surprising if Schneier was not using PGP on a regular basis.

The post to which you were replying was amusing at the cost of a lack of rigor.

Bruce generally does not sign/encrypt his email because he views email as a low-security communications mechanism anyway.[1]

He generally advises a rational risk assessment when determining how much security to apply to a process. He often uses the example of locking doors on your house, etc.

In re PGP, he's been critical of a number of shortcomings in PGP and GnuPG since the beginning, but by the same token, one of his first hires at Counterpane was Jon Callas.

[1] http://www.esecurityplanet.com/trends/security-tips-from-bru...

Alternatively, it could mean that while he uses it often, he does not often use it for things that are actually important. Faced with unusually important communication, he may have decided to create a new key that he could be confident was still private.

I've had a PGP key since 2001. I upgraded to 2048 bit some years ago due to the increasing weakness of 1024 bit keys.

I think I've received maybe one message encrypted to me. Everything else I use PGP for is to send signed emails to mailing lists.

I would like to use PGP. In my line of work, folks use computers as television sets: they open a browser, a PDF reader, office software. I am afraid that if I use PGP just for signing my emails, it might look like spam to them.

Schneier just did an article about public-key crypto weaknesses and advocated larger keys. It's on his blog, in an entry from last week when he first got hold of the Snowden docs. Did he sign the new key yet? At first he just generated a new key with no signature, leaving people wondering if he had been replaced by an NSA robot.

I'm not sure this is so much an "or", but maybe, at best, an "and" given the circumstances of source material he is currently working with. I'm not as ready to assume the timing is only routine maintenance.

Who says it has to be "routine" maintenance? It's obviously not; he's changing his key after 16 years. What's more likely is that he's just much more aware of his PGP key now than he was in the preceding years, because of the prominence he's taken in the story and the fact that he's now actively courting leakers.

FWIW, I emailed Bruce who said: >> It's longer, and as long as I was already creating a new keypair there was no downside.

If we loosen "lost" in "he lost his private key" to include "lost control of", then that still works.

Bruce's article on staying secure from the NSA[1] talks about using an air-gapped computer to avoid being compromised via the network. If he hadn't been keeping his keys on such a machine previously, recent disclosures may have changed his mind and forced him to regenerate his keys.

[1] http://www.theguardian.com/world/2013/sep/05/nsa-how-to-rema...

I think this is the most likely reason.

So I have a GPG key. I used it a couple of times. Currently, it's most useful to me for signing my own Debian package repository. However, I can't seem to figure out how to get into the whole Web of Trust thing. Nobody I know has their own GPG/PGP key that they use and have signed by others, and tools like BigLumber and other places where I looked for key signing parties have not turned up any results. I'm not spending all my free time looking for GPG users, but I have spent what I feel is more than a casual amount of time looking for people to exchange key signatures with. What do y'all do for this? Any advice?

Edit: I am located in the North Eastern part of the US.

Edit 2: perhaps we need a geolocation aware social network a la Square but just for notifying you of other nearby PGP users...

Webmail services could have jump-started WoT a decade ago, much like Hotmail jumpstarted popular email usage. [1] Had they "turned on" PGP for all their users, and then made it easy to tell when messages were being sent securely and when they weren't, it would have completely (and for the better) changed how we interact online (e.g., no need to "sign up" for web services, or "sign in" to every single site we visit; a lot less spam; and incredible other potential besides).

I think we can now surmise one big reason why they didn't.

(tptacek will say that web-based PGP is the wrong way to go because it's too insecure: fact is it's still way more secure than sending cleartext emails, and in any case the point of it is to bootstrap adoption and hopefully trigger an email "arms race")

[1] Gmail even began with some support for PGP signature verification http://googlesystem.blogspot.com/2009/02/gmail-tests-pgp-sig... ... and then stopped. Anyone on the inside know why?

Webmail + PGP really is insecure. The only way to do it properly would be with some OSS plugin + completely separate process for actually reading/writing emails.

However, I think we could use a better way to associate emails with PGP keys. For example, my email uses a domain I own. I have HTTPS on my domain, so I can deliver my public key to you securely over the Web. Alternatively, Google could have a service where you securely ask it "what is the key for example@gmail.com" and it responds with the key.

Bootstrapping a full WoT is hard, but oftentimes the requirements are much less strict. Most times I just want to know that if I email you, you are the only one who can read the email. I might not even care if you are truly the person you claim to be, as I know you by your email handle more than by your real name (as in emailing Satoshi Nakamoto).

Of course, having a webmail provider tell others what your public key is means you have to trust your webmail provider not to lie. You also have to trust their delivery mechanism (HTTPS) and the people that issue them their SSL certificates (as we know this can be circumvented by motivated governments).
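The provider-asserted lookup described above is simple on the client side; a sketch, where the `DIRECTORY` dict stands in for an invented HTTPS endpoint like `https://mail.example.com/pgpkey?addr=...`:

```python
from typing import Optional

# Stand-in for the provider's directory, served over HTTPS in practice.
DIRECTORY = {
    "alice@example.com": "-----BEGIN PGP PUBLIC KEY BLOCK----- ...alice...",
}

def lookup_key(addr: str) -> Optional[str]:
    """Return the provider-asserted public key for addr, or None.
    Trust reduces to trusting the provider plus the HTTPS/CA chain."""
    return DIRECTORY.get(addr.strip().lower())
```

Note the normalization step: the lookup has to agree on a canonical form of the address, or two clients will disagree about whether a key exists.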

I understand why GMail wouldn't want to support PGP. They read your emails to target ads at you. Without that there would be no GMail. If you encrypt everything you send/receive and GMail cannot read it, then they have no way to monetize it.

There are two concerns:

1) Security of email messages in transit, assurance that you're receiving emails from the person who claims to be sending them, etc.

2) Preventing your email service provider (and any MITM) from reading your emails.

These concerns are relatively independent of each other. While if you're a die-hard PGP advocate you'll want both 1 and 2, PGP-in-Gmail gives us 1, and that's a pretty great start.

Right now we have neither, and we're all much poorer for it.

PGP-in-Gmail instantly gives it to everyone that has an @gmail account (and anyone else who has signed up to a WoT). It would probably insist that you use two-key authentication. And it would work like this:

0) Every message you send is automatically signed by you by default.
1) Type one or more email addresses into the To: bar.
2) If all of the email addresses you entered have public keys associated with them that Gmail can locate, they all look "Green" (or whatever) and the [Send] button becomes [Send securely].
3) If any of the addresses doesn't have a public key associated with it, then nothing is encrypted (i.e., exactly the behavior we have now).
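The compose-time decision above is a few lines of logic; a sketch (names invented; `known_keys` stands in for whatever directory or WoT the provider consults):

```python
def compose_state(recipients, known_keys):
    """Return (button_label, will_encrypt). Signing is assumed always-on,
    per step 0; encryption is all-or-nothing across recipients."""
    recipients = [r.strip().lower() for r in recipients if r.strip()]
    if recipients and all(r in known_keys for r in recipients):
        return ("Send securely", True)
    # Any keyless recipient forces cleartext - exactly today's behavior.
    return ("Send", False)
```

The all-or-nothing rule matters: encrypting to some recipients while CCing others in cleartext would give a false sense of security.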

If you don't trust Gmail, you shouldn't trust it any less if/when they deploy PGP for it. And no one is saying you have to use it. All I'm saying is that Gmail, Hotmail, etc. seem to be the best equipped to trigger the widespread adoption of PGP.

> If you don't trust Gmail, you shouldn't trust it any less if/when they deploy PGP for it.

The problem here might be that people (including Google, I guess) don't want users to trust anything MORE THAN THEY SHOULD, which is a major risk in a case like this. Sometimes security features can be counterproductive since they can lead to the users making bad assumptions and therefore bad decisions that they otherwise wouldn't have made. PGP in webmail implemented just in JS is likely one of these things that could make things worse due to how users treat them.

I'm actually suggesting that if we're going to trust Gmail completely anyway, we might as well trust them to encrypt and decrypt everything server-side. No need for any fancy PGP in JS. Gmail still gets to read your emails and generate ads (though it might not be able to run offline analytics on your emails). The point is that with PGP-in-Gmail we can at least trust that email in transit is much more secure, and furthermore we can verify the identity of anyone sending us messages, too.

> However, I think we could use a better way to associate emails with PGP keys. ...
> Alternatively, Google could have a service where you securely ask it "what is the key for example@gmail.com" and it responds with the key

There is the PGP Global Directory, which does essentially that (i.e., given an email address it provides a key, with the email address verified by sending an email there... which is as good as what Google would be able to provide).

Doesn't seem to get much more use either way: I don't think the problem with driving up PGP adoption is the distribution of public keys.

I've seen keysigning parties at a few technical conferences. Find one, preferably one that caters to programmers and/or sysadmins rather than managers and/or marketers, and which has extra space for not-scheduled-in-advance meetings. Then schedule a keysigning party and see who shows up. Remember to bring ID.

Sounds like a fun event for a hacker space.

You might be able to find a nearby willing Debian developer https://wiki.debian.org/Keysigning/Offers#US

If you're near a university or work somewhere in tech you'll probably be able to find a DD within a degree or two in your network.

Looks like there is exactly one in my state. I will reach out to him. Thanks.

Hi NJ :D could be CT but figured it'd be New England at that point. Good luck though, but either way there are a lot in NY so maybe some luck there.

I am in CT :)

>perhaps we need a geolocation aware social network a la Square but just for notifying you of other nearby PGP users...

To what end though? What do you share in common with them other than the fact that you're both probably interested in cryptography? Just because you can easily communicate back and forth with encrypted messages doesn't mean you'll actually have much to talk about.

I want people to sign my public key. I don't care who they are but the more people do the more people can trust that it really is my key. Imagine if your phone told you that you are at the same coffee shop as someone who is also a registered PGP user and has not yet signed your key. That would be pretty easy, right?

I want something that makes signatures of defined meaning.

Like, signatures on my key for "controls the hn:rdl account", "controls rdl@mit.edu email address" etc. With dates, so I can accumulate multiple signatures over time.

I'd trust a key from someone with 14 years of "controlled ... email address" signatures on it more than someone showing me a plastic ID card in a bar.

http://biglumber.com/ is an oldie but a goodie. There's room for improvement in this area.

Not to be snarky, but have you tried using google?

I get a couple requests a year when someone comes through town. It could do with more participation, though. :-)

Not to be snarky but in my original comment I said that I did use BigLumber without much success.

Not to be snarky but oops, my bad. :-)

Local Linux User Groups often have people willing to sign the keys of other members in exchange for a signature on their key.

Check out http://www.biglumber.com/ as well

It's curious that he didn't sign his new key with his old key. Does anyone have a good explanation for why he wouldn't want to do that?

If someone can crack his old key as of 2020, they can start distributing a fake Bruce Schneier 4096-bit key at that time. He might think it's better for him, as something of a security celebrity, to just publish a new key.

If the old key is revoked, and is the only trust path to the new one, it's a worthless key.

In the post he also describes that he now uses a new process which involves a computer that has never been connected to the internet and its sole purpose is encrypting and decrypting files. Why not use it to encrypt and decrypt emails as well? That'd also potentially involve generating a new key pair.

> 3) Assume that while your computer can be compromised, it would take work and risk on the part of the NSA – so it probably isn't. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it's pretty good.
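The shuttling step in that quote maps to a couple of gpg invocations. A sketch using symmetric encryption for the self-transfer (filenames and passphrase are placeholders; assumes GnuPG 2.1+, where `--pinentry-mode loopback` allows a batch passphrase):

```shell
# On the air-gapped machine: encrypt before the file ever touches the USB stick.
echo 'draft notes' > notes.txt
gpg --batch --yes --pinentry-mode loopback --passphrase 'use a real passphrase' \
    --symmetric --cipher-algo AES256 --output notes.txt.gpg notes.txt

# cp notes.txt.gpg /media/usb/    # walk the stick to the internet machine

# On the internet-connected machine: decrypt after copying off the stick.
gpg --batch --quiet --pinentry-mode loopback --passphrase 'use a real passphrase' \
    --decrypt notes.txt.gpg > notes.out.txt
```

For files destined for someone else, `--encrypt --recipient <key>` replaces `--symmetric`; the shuttle procedure is the same either way.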

I assume it's a well-thought-through and properly risk-assessed security/convenience tradeoff. Handling encrypted files is much less frequent than handling encrypted email, and putting an air gap between the internet and your email is likely to cause more grief than the security improvement justifies.

I've got the seeds of an idea which has been kicking around my head for a few weeks now: a Raspberry Pi (or similar) with GPG installed on it, connected to my main computer as a USB device (possibly impersonating a USB keyboard). The Pi could be sent encrypted data over the USB/serial connection and send back the plaintext. The Pi would have no network connection, reducing the attack surface for someone trying to extract my private key remotely to some exploit that'd work over a tightly constrained serial connection. Sort of like an RSA SecurID on steroids: here's a device with a "cryptographically secure secret", but instead of just displaying TOTP tokens, you can feed it encrypted data and have it send back cleartext (optionally with a keypad and PIN/passcode required, but it's not "secure" against physical access, so I'm probably not going to try to implement that...).

That definitely fits with the 'hacker ethos' and all, but why not just use a smartcard with a Class III reader (i.e. dedicated pinpad and display on the reader itself).

Support is already integrated with GnuPG, they are specifically designed to prevent key material leaking, and they have some other nice properties (like self-destruction after three incorrect admin PIN attempts).

Smartcards are cheap, anyway: http://shop.kernelconcepts.de/product_info.php?cPath=1_26&pr...

Mostly because I've got a pair of Raspberry Pis, and I'm doing this mostly out of curiosity and learning (and a little bit of "sticking it to 'the man'"…).

If you have to store encrypted credit card data, that's the recommended way of keeping it safe. Your Pi is analogous to hardware encryption devices that have been available for some time.

With Linux, is there any way to compromise the USB stick used for the air gap? AFAIK the Stuxnet virus was originally spread via USB stick, but I reckon that involved Windows machines, which were known to automatically execute files on USB sticks.

If you aren't 100% sure of the provenance of your USB stick, you can't really rely on it for _anything_. Have you seen Travis Goodspeed's "Writing a Thumbdrive from Scratch" presentation?


Indeed! An intelligent "USB Device" is what was used to originally jailbreak the PS3. [0] A USB device (commonly using an AVR USB microcontroller) sends differing USB descriptors at different times - a sort of double-fetch vulnerability. The exploit leads to complete hypervisor access. Do NOT trust your USB device!

[0]: http://ps3wiki.lan.st/index.php?title=PSJailbreak_Exploit_Re...

I think some basic opsec is in order here. The airgapped computer should format every thumb drive plugged into it, or even better, should ignore any USB device that is not the specific thumb drive you are using. All of this could be configured without terribly much pain (which is not to say it is trivial) in GNU/Linux. Not perfect, of course, but would stop quite a few attacks and reduce the chance of carelessness screwing up security.

Years ago, a classmate of mine built a rig out of a receipt printer and one of those old handheld scanners to provide an "air gap", though it never really worked (sort of an art project at the time). Might be time to revive the idea...

How about 2 serial ports, connecting only TxD, RxD and GND? 3-wire RS-232 basically has no attack surface, there's no protocol to speak of. [edit: shabble already suggested this]

Something very similar to this has been used in the military to "bridge" network barriers at differing security levels. The US Navy uses "SDR" (Secure Data Replication) to transfer content under control.

You could get all stuxnet and exploit the various applications (such as the components that inspect zipped content), but the transport itself is a simple file copy over a bitstream. You could do the same thing with kermit and uuencode a bit more easily.

Meh, you can do fine by using two one-way ethernet cables (you might have to cut the receive wires yourself), and some tweaked network stack.

The ol' DIY Data-Diode[1]

I've heard of using serial lines/modems with the appropriate tx->rx cut, but I don't know if it would actually work for ethernet (maybe 10BaseT only?)

[1] https://en.wikipedia.org/wiki/Unidirectional_network

I'm just speculating here... but I think it could be done PC-to-PC up to 100mbit. IIRC, there's a "link" signal that normally exists -- you'll have to configure both cards to ignore the link signal. (Which, I suppose, would mean that this wouldn't work going to a switch or hub with factory-default firmware.)

Half-duplex, 100mbit, ignore link. I suppose it can be done?....
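The transmit-only idea can be prototyped in software before cutting any cables: use a datagram protocol with no return path, so the receiver never needs its TX pair at all. A sketch (port and chunk size are arbitrary; a real data diode would add forward error correction, since nothing can ever be retransmitted):

```python
import socket

def send_oneway(data: bytes, addr=("127.0.0.1", 9009), chunk=1024):
    """Fire-and-forget sender: transmits datagrams, never listens."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for i in range(0, len(data), chunk):
            s.sendto(data[i:i + chunk], addr)
        s.sendto(b"", addr)  # empty datagram marks end-of-stream
    finally:
        s.close()

def recv_oneway(addr=("127.0.0.1", 9009), timeout=5.0) -> bytes:
    """Receive-only side: binds, reads until the end marker, never replies."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(addr)
    s.settimeout(timeout)
    out = bytearray()
    try:
        while True:
            pkt, _ = s.recvfrom(65535)
            if not pkt:
                break
            out.extend(pkt)
    except socket.timeout:
        pass  # lost end marker; return whatever arrived
    finally:
        s.close()
    return bytes(out)
```

Over a physically one-way link, dropped or reordered datagrams are simply gone, which is exactly why real unidirectional gateways layer redundancy on top.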

An interesting thing to note about 4096-bit RSA OpenPGP keys: that's what Snowden was using. His PGP key was a 4096-bit RSA signing key with a 4096-bit RSA encryption subkey.

I suspect that it's because 4096 is the largest permitted RSA key on most software right now. There is no 11 on that dial.

Some allow larger keys. I once generated an 8192 bit key. It took a loooong time. Never used it. I guess that piece of software would have allowed 16k keys too.

Do you have a source regarding the technical details of the Snowden leaks? Not doubting, I'm just interested in reading about it, nothing I've seen from The Guardian/NYT/WP covered this.

Anyone know of a good tutorial for revoking and recreating your key as painlessly as possible?

I'm sure the man page for gpg can tell you the command-line incantation, but I use https://gpgtools.org/ for Mac, which just has "make key" and "revoke key" buttons.

The tricky bit is getting your friends to sign your new one; you'll have to re-do all of that work.

I'd imagine the process works something like this:

   * Generate the new key
   * Sign the new key with the old key
   * Generate the revocation cert for the old key
   * Push the revocation publicly with a reason of "Superseded by (fingerprint of new key)" or similar
   * Push the new key
   * Try to get your new key signed by everyone that signed your old key for authenticity's sake
I'm not too sure how the community of GPG users out there sees key revocation socially, so you may or may not want to bother with that bit. Perhaps hold off on pushing the revocation until the new key has been in use for a while?

One issue I have noticed is that people do not frequently update their keys from the key servers. Key revocation is not all that useful if you are not periodically checking for published revocations...
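For anyone who wants to automate that refresh: a crontab line along these lines pulls updates (and therefore revocations) for every key already on your ring (the schedule is just an example):

```
# Refresh all keys on the local keyring every Sunday at 04:00,
# so published revocations actually reach you.
0 4 * * 0  gpg --refresh-keys
```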

And 90% of the time, people want to revoke their old keys because they had a pressing one-time need to have a PGP key (vendor exchange, whatever) and lost the private key without generating and saving a revocation certificate.

This is a place where solid infrastructure support at the OS-level would really help. Instead, it's nerds and the paranoid who bother, and they'll steal the software or insist on open source anyway. :-)

If you do this, will you still be able to decrypt data encrypted with the old key?

Only with the old private key.

I know I'm still a cryptographic neophyte, but why doesn't he use four times the bits?

Sometimes longer keys are worse. I am also a newbie so I forget what that kind of attack is called.

If you find anything could you please let me know?

Someone upstream said that 4096 is the largest number of bits you're allowed to have for an RSA key right now.

I wish there were a decent hardware PGP key token available now: something which could support 4096-bit RSA and communicate with a host via (ideally) Bluetooth, or failing that USB. The GPF stick is out of stock.

I don't even know if common Linux distributions support the right combination of drivers and gnupg for 4096-bit RSA keys. I tried this a couple of times over the last two years or so, and there was always some bug, or it was fixed in a later version not yet in the repos.

I'm on OSX, and macports gpg does 4096 fine (gpg (GnuPG) 1.4.13). Also works fine on Ubuntu (gpg (GnuPG) 1.4.11)

What no one seems to support is ECC, but until I get more confirmation, I'm inclined to follow Schneier and stick to 4096 RSA in preference to ECC, at least where possible.

I really don't want my key to be on my general-purpose machine. I have fully airgapped machines for code signing and such, but for regular communications, I want a key which can be used from my laptops (mac, ubuntu) and ideally from iOS/Android (via bluetooth), without being exposed to compromise on the host. You can force me to sign or decrypt arbitrary stuff while my key is paired, but can't steal my key.

Ideally v2 would have some kind of log of all transactions on the secure token, so I can at least see if I'm being trolled for signatures/decrypts by malware on the host, and in v3, some kind of secure UI on the device.

I figure I could make a device like this, in a fully open/auditable design, for <$250/unit. Ideally something which looks like the BlackBerry CAC reader, but uses Bluetooth 4.0 LE to talk to a paired host, rather than power-hungry classic Bluetooth.

The bug in question was the interaction between the GPF cryptostick and 4096-bit RSA keys. You could create them and put them on the stick, but trying to verify signatures had a bug. I believe it's fixed.

I can confirm the German Privacy Foundation PGP stick does NOT support ECC.

You might talk to the GPF, they are currently working on their next version of their cryptostick, and it's supposed to have encrypted USB storage and the ability to support applets.

I've been using 16,384-bit SSH keys for the past year and am considering moving to 32,768 as soon as my two-year rotation period is up. Would there be any problems with my key sizes?

Long key generation, longer login times, and possible lack of proper entropy in the key if the entropy source is not good.

I'm okay with waiting, but the lack of entropy worries me a tad. Would that be super common with keys of that length? Thanks for the reply!

1024 should have been safe, but modern PCs can easily do 4096... so all my certificates are 4096 at the least. No reason to take a chance when there is no significant downside. So why take the risk with 1024? Maybe it's too weak; and maybe this means we can keep our data secure for longer.

The only reasons NOT to use them: mobile end-clients, and busy servers that would die from the extra CPU burden. But other than that, WHY NOT?

I know the fundamental idea behind PGP and related technologies. My question is, if bumping his key from 2048 to 4096 bits will keep him safe until around the year 2020 (as stated by a previous reader, and from keylength.com), why not just use a 8192 bit key, or 16384 bit key and be safe for virtually your lifetime?

Does the computing cost to encrypt/decrypt make this impractical?

4096 is the largest key size gpg offers today. It was the largest key size gpg offered in 2009, which is why that's the key size I'm using now. In 1996 the largest key size pgp supported was probably 768, which is why my first pgp key is that size. I know for sure that in 1999 the largest key I could manage to make was 2048. Looking back at those older keys, I would prefer if I could have chosen larger key sizes for them. So I suspect that in 10 years I will wish I had been able to use an 8192-bit or larger key today.

I suspect that gpg partly doesn't offer insanely large key sizes because then people like me will naively use them even if we don't need them. And perhaps partly because dealing with the math for such large numbers is harder to implement. I'd rather it offered much larger keys even if they came with warnings that it might make operations slow.

Seems that if you're really paranoid, gpg --gen-key --batch with an appropriate batch file can make 8192-bit or larger keys. Currently trying to generate an 81920-bit key, for general giggles and to increase my NSA rating.

Could you provide a sample batch file for this purpose? That’d be really helpful.
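Not the parent, but GnuPG's unattended key generation takes a parameter file along these lines (all values are placeholders; whether your gpg build accepts lengths above 4096 in batch mode varies by version, which is the loophole being described above):

```
%echo Generating an oversized RSA keypair (this can take a very long time)
Key-Type: RSA
Key-Length: 8192
Subkey-Type: RSA
Subkey-Length: 8192
Name-Real: Example User
Name-Email: user@example.org
Expire-Date: 2y
Passphrase: use-a-real-passphrase-here
%commit
%echo Done
```

Then run `gpg --batch --gen-key params.txt`. Expect generation at these sizes to take anywhere from minutes to hours while the system gathers entropy.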

In 1998, the NSA required MIT to suborn their PGP; MIT publicly stated that at the time. The NSA simultaneously banned the use of MIT's European confederate's version of PGP by U.S. citizens and blocked access to that university's FTP from the U.S. Naturally, being overseas at the time, I downloaded the European version and, since it allowed creation of up to 4096-bit keys with the option of manually specifying non-standard lengths, I created a very large key, which I saved to floppy disk.

Time complexity of RSA operations is somewhere between O(n^2) and O(n^3), with n being the number of bits of the modulus, so using longer keys than necessary gets impractical really fast.

It should be O(n^2). Doubling the length of the key will incur a four-fold time requirement for any single RSA operation. (I verified this a few years ago when writing an article about practical cryptosystems.)

The reason is actually quite simple. As far as I understand, the bignum libraries store the large numbers as an array of "limbs". Doing a bignum operation requires the library to iterate through the array, one limb at a time. The operations required for a single RSA calculation are effectively "run every limb in array A against every limb in array B". So you have a nested for-loop over N elements with no possibility of early termination.

As for the numbers from the article: my old 400MHz box spent 20ms signing or encrypting a block of data with 1024-bit RSA key. The same operations took 80ms with a 2048-bit key.
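To make the nested-loop point concrete, here's a toy schoolbook ("limb by limb") multiplication in Python. This is an illustration of why work grows quadratically, not how real bignum libraries like GMP work (they switch to smarter algorithms such as Karatsuba above certain sizes):

```python
def limb_multiply(a_limbs, b_limbs, base=2**32):
    """Schoolbook multiplication of two little-endian limb arrays.
    Returns (product_limbs, operation_count)."""
    result = [0] * (len(a_limbs) + len(b_limbs))
    ops = 0
    for i, a in enumerate(a_limbs):
        carry = 0
        for j, b in enumerate(b_limbs):
            ops += 1  # one limb-by-limb multiply-accumulate
            total = result[i + j] + a * b + carry
            result[i + j] = total % base
            carry = total // base
        result[i + len(b_limbs)] += carry
    return result, ops

def to_limbs(n, base=2**32):
    """Split an integer into little-endian limbs."""
    limbs = []
    while n:
        limbs.append(n % base)
        n //= base
    return limbs or [0]

def from_limbs(limbs, base=2**32):
    """Reassemble an integer from little-endian limbs."""
    return sum(l * base**i for i, l in enumerate(limbs))

# Doubling the operand size quadruples the limb operations:
x = to_limbs((1 << 1024) - 1)   # 32 limbs of 32 bits
y = to_limbs((1 << 2048) - 1)   # 64 limbs of 32 bits
_, ops_1024 = limb_multiply(x, x)
_, ops_2048 = limb_multiply(y, y)
print(ops_1024, ops_2048)  # 1024, 4096
```

This also shows why native word size matters: with 64-bit limbs the same 2048-bit number needs half as many limbs, so the inner loop runs a quarter as many times.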

But doesn't PGP/GPG use the pub/priv key only to encrypt/decrypt a _symmetric_ key for the encryption/decryption of the actual data?
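Yes — OpenPGP is a hybrid scheme: the slow public-key operation only encrypts a random per-message session key, and a fast symmetric cipher encrypts the bulk data. A toy Python sketch of just the structure (the stream cipher here is SHA-256 in counter mode and the "key wrap" is symmetric — stand-ins for real RSA/AES, absolutely not secure crypto):

```python
import hashlib
import os

def keystream(key, length):
    """Toy stream cipher keystream: SHA-256 in counter mode. NOT secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def symmetric_encrypt(key, data):
    """XOR the data with the keystream."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

symmetric_decrypt = symmetric_encrypt  # an XOR stream cipher is its own inverse

def hybrid_encrypt(wrap_key, plaintext):
    """PGP-style structure: fresh session key, wrapped for the recipient.
    `wrap_key` stands in for the recipient's public key."""
    session_key = os.urandom(32)                        # random per message
    wrapped = symmetric_encrypt(wrap_key, session_key)  # stand-in for RSA
    body = symmetric_encrypt(session_key, plaintext)    # fast bulk encryption
    return wrapped, body

def hybrid_decrypt(wrap_key, wrapped, body):
    session_key = symmetric_decrypt(wrap_key, wrapped)
    return symmetric_decrypt(session_key, body)

msg = b"attack at dawn"
wrapped, body = hybrid_encrypt(b"recipient-secret", msg)
print(hybrid_decrypt(b"recipient-secret", wrapped, body))  # b'attack at dawn'
```

The upshot for key sizes: the expensive RSA operation only ever touches the 32-byte session key, so message length doesn't change the RSA cost — but the session key is only as safe as the RSA wrapping around it.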

Is he really afraid of the 2^(n/2) brute-force power of quantum computers, or is this just overkill on top of overkill?

There's no need for a quantum computer. Everyone should be using at least 4096-bit RSA.

1024-bit RSA keys can be factored with conventional, non-specialized hardware (read: CPUs, not even GPUs) using GNFS.

IMHO, 2048-bit RSA keys can be factored by custom hardware that the NSA has developed. I posted my reasoning for this hypothesis in other Hacker News threads. A very quick/terse rundown of the main points: 1) The NSA is known to use custom hardware (they have their own chip fabs); you can extrapolate the performance gain from things like GPUs, FPGAs, and Deep Crack. 2) Al Qaeda uses 2048-bit RSA for internal communications. 3) Most corps, diplomats, criminals, and normal people use 2048-bit RSA either directly (SSH keys, website certs, VPNs) or indirectly (CAs still use 2048-bit RSA certs valid until 2020).

"2. Al Qaeda uses 2048bit RSA for internal communications

3. Most corps, diplomats, criminals, and normal people use 2048bit RSA either directly (SSH keys, Website Certs, VPNs) or indirectly (CA's still use 2048bit RSA certs valid until 2020)"

I don't see how this is evidence that NSA has the ability to compromise 2048 bit keys, at will. Only that they very likely desire that ability. Math doesn't respond to desire.

That's not to say I believe they don't. Just that I can't accept two of your three premises for why one should believe they do.

Given the rumors of NSA crypto breakthroughs and the fact of massive expenditures, it's not unreasonable to believe they've compromised a primary target.

> normal people use 2048bit RSA either directly (SSH keys, Website Certs, VPNs)

Any reason for that?

[Almost] all of my SSH and TLS (be it HTTPS or OpenVPN) keys are 4096 bits long.

I wasn't worried about TLAs with supercomputers snooping on my wires; I'd just heard that 4096-bit RSA keys are considered more secure than 2048-bit ones while not sacrificing much performance, so I had no reason to specify a smaller size.

When a lot of these tools were first implemented, to get enough entropy, you would have to type and move your mouse for a long time to generate a 1024-bit key. I remember really, really hating that process.

Now, you kids these days with your entropy pools and PRNGs in your CPUs because you had an empty spot on the tape-out...get off my lawn!

    Any reason for that?

OpenSSL's default key length is 2048 bits? ssh-keygen uses 2048 by default too.
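For reference, asking for a bigger key is just a flag away with both tools (flag names as of current OpenSSH and OpenSSL; check your local man pages):

```
# SSH: default is 2048-bit RSA; ask for 4096 explicitly
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_4096

# OpenSSL: generate a 4096-bit RSA private key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out key.pem
```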

The (public) factorization record with GNFS is 768 bits, in an effort that took about 2000 CPU-years. 1024 bits is about 1000x harder, so probably within reach of government resources. 2048 bits is 10^12 times harder, which is surely out of reach for the time being unless the NSA has a better algorithm.

Lenstra et al. performed the factorization you cite, again on CPUs. Lenstra said in 2007 that he expected to be able to do a 1024-bit number within 5 years, again with CPUs. A 2048-bit number is nowhere near 10^12 times harder if you use GPUs with larger word/op/register sizes. That's especially so with FPGAs or custom hardware with custom-sized words/registers/ops. With FPGAs and custom hardware you can also lay things out physically to gain a speed advantage. I really don't think you'd need a replacement for GNFS to do a 2048-bit number.

This isn't directed at you, but I wish people would stop talking about how strong crypto is if they haven't written software to break it, don't understand the mathematics, and don't understand hardware design. I just facepalm and shake my head when people post publicly that you'd have to boil the oceans to factor a 1024-bit number (i.e., break a 1024-bit RSA OpenPGP key).

10^12 is how much more work you have to do (according to the complexity estimate for the GNFS), regardless of how you do that work. If your custom-built hardware is 10^12 times faster than a general-purpose CPU, you can factor a 2048-bit number on your hardware as fast as a 768-bit number on general-purpose CPUs, but it will still take 10^12 times more work than factoring a 768-bit number on the same hardware. And I've never heard of GPUs or FPGAs giving a speedup of more than perhaps 10^4 on any problem, which still leaves a factor 10^8.

I think you're confusing complexity with difficulty. It may be 10^12 times more complex, but if you're making your own FPGAs or chips you can use wider word sizes, or massively add more gates and processing cores, or build even more specialized custom supercomputers. I think the fabrication capabilities available mitigate most of the protection provided by using 2048-bit over 1024-bit keys. It seems you need to be talking on the scale of at least 4096 bits before the problem becomes intractable despite the limits of technology and the processing improvements that custom hardware can provide.

To put it in simple terms: operations on 32-bit numbers and on 64-bit numbers have a calculable complexity gap. But that gap means very different things in terms of difficulty on an 8-bit vs. a 32-bit vs. a 64-bit microprocessor.

IIRC some piece of news surrounding all this stated that the NSA considers anyone using cryptography to protect their communications to be people of interest, whose dragnetted correspondence will be stored indefinitely. I'm curious if the NSA prioritizes these people according to how strong their keys are. I imagine that the logic would be that someone using 4096bit keys is either paranoid or really really has something to hide.

It's more of a sign that the user is au fait with computers. The sort of person with a 4096-bit key probably looks through the advanced settings to find the option, or carefully reads the manual.

I think it is a bit of a stretch to say that 2048 bit keys can be factored with NSA resources unless the NSA also has a more efficient algorithm than GNFS. However, it is also good to protect against potential future improvements over GNFS, so 3072 or 4096 bit RSA is still probably a good idea.

The complexity is 2^(n/2), but we don't know (afaik) whether a quantum computer would also be way faster per single operation than conventional computers. If so, the more bits the better.

He is.

Or... he's playing it safe, since he doesn't know what he doesn't know.

Yet he prefers AES-256 over Twofish-256 or Camellia-256.

Source? And even if he does, how is that a problem? AES-256 has been independently tested and reviewed. Whether or not the NSA forced the NIST [1] to adopt it as a symmetric encryption standard is irrelevant.

[1]: http://blog.cryptographyengineering.com/2013/09/on-nsa.html

There is nothing suspicious with that.

He has previously worked mostly in corporate and private contexts, where 2048 bits is just fine. Now he works with people and data the NSA wants its hands on, and he wants the data to remain secure into the future. It's only reasonable to move to a 4096-bit key.


>Dr Lenstra and Dr Verheul offer their recommendations for keylengths. In their calculation, a 2048 bit key should keep your secrets safe at least until 2020 against very highly funded and knowledgeable adversaries (i.e. you have the NSA working against you). Against lesser adversaries such as mere multinationals your secrets should be safe against brute-force cryptanalysis much longer, even with 1024 bit keys.

See also: http://www.keylength.com

Your secrets are not safe against multinational corporations with 1024-bit keys. The cost of the capability to break a 1024-bit key is probably (for a private entity) in the low tens of millions of dollars. You wouldn't even be safe from the operators of HN with that margin of security.

Now you've got me imagining what kind of data / communications would be deemed valuable enough to someone to make that kind of monetary expenditure worthwhile.

That's why we are talking 2048 vs. 4096.

I think he's referring to the trailing sentence that you quoted which mentioned 1024-bit keys being safe against multinationals.

Read the guardian article.

He is using Windows.

Well mine is 4097 bits. Checkmate, Sir.

Yours goes to 0x1001.
