- he forgot his password
- he lost his private key
- he knows more than he can tell us
- he really doesn't use his PGP key all that often; he'd had the same one for 16 years on god knows how many computers, and decided that if he was going to generate a new one, he might as well send a message with it.
- Snowden et al asked him to create a new keypair with very strict cleanroom practices to be sure as to the sanctity of the private key. It's trivial to bump up the key length when doing so.
: See the August 18 NYT article on Laura Poitras and Glenn Greenwald for a peek at the "operational security" that Snowden demanded. http://www.nytimes.com/2013/08/18/magazine/laura-poitras-sno...
(and to be clear, my intent is not to bait, i'm actually curious)
the other trick is naming things: why would such a vague name as "Advanced Encryption Standard" designate specifically a symmetric cipher?
It's only in a very few cases where there's an advantage in signing a document, and it's usually more in the verifiability of content (so that you can verify that nothing is lost/changed in transit) than in the verification of identity.
Given the lack of adoption of PGP/GnuPG in email clients vs. S/MIME, if I'm signing my emails without encrypting them, chances are the recipient would still be able to read my emails and, knowing my writing style and given the context, be able to suss out that I was in fact the author.
I use the word "paranoia" intentionally because there's a lack of meaningful legal precedent establishing that a gpg-signed message is enough to establish authorship. In a civil case, sure, it looks bad, but you could easily say, "Oops, I stored my private key on [vps or cloud service], which was a well-known victim of a hack."
Outside of those few work examples, I don't think I've ever sent or received a PGP encrypted message. In fact, the only signed messages I think I've ever seen were mailing list messages from people who signed all their messages.
(Note: FWIW, the email address in your profile doesn't provide a public key from pgp.mit.edu – I would have added you to my keychain and sent you a "Hi! Isn't it nice to introduce yourself without the NSA listening in!" email…)
I don't think I even have a key for my current email address.
If his computers have ever been compromised, his PGP private key would likely be as well. Considering all the news about the US gov spending millions on rootkits, it was likely a wise choice to generate a new one in case the previous one had been compromised.
I'd guess Schneier would be a target of interest for some state (as is anyone working with encryption, it seems).
Bruce generally does not sign/encrypt his email because he views email as a low-security communications mechanism anyway.
He generally advises a rational risk assessment when determining how much security to apply to a process. He often uses the example of locking doors on your house, etc.
In re PGP, he's been critical of a number of shortcomings in PGP and GnuPG since the beginning, but by the same token, one of his first hires at Counterpane was Jon Callas.
I think I've received maybe one message encrypted to me. Everything else I use PGP for is to send signed emails to mailing lists.
Edit: I am located in the North Eastern part of the US.
Edit 2: perhaps we need a geolocation aware social network a la Square but just for notifying you of other nearby PGP users...
I think we can now surmise one big reason why they didn't.
(tptacek will say that web-based PGP is the wrong way to go because it's too insecure: fact is it's still way more secure than sending cleartext emails, and in any case the point of it is to bootstrap adoption and hopefully trigger an email "arms race")
 Gmail even began with some support for PGP signature verification http://googlesystem.blogspot.com/2009/02/gmail-tests-pgp-sig... ... and then stopped. Anyone on the inside know why?
However, I think we could use a better way to associate emails with PGP keys. For example, my email uses a domain I own. I have HTTPS on my domain, so I can deliver my public key to you securely over the Web. Alternatively, Google could have a service where you securely ask it "what is the key for email@example.com" and it responds with the key. Bootstrapping a full WoT is hard, but often times the requirements are much less strict. Most times I just want to know that if I email you that you are the only one that can read the email. I might not even care if you are truly the person you claim to be, as I know you by your email handle more than by your real name (as in emailing Satoshi Nakamoto). Of course having a webmail provider tell others what your public key is means you have to trust your webmail provider not to lie. You also have to trust their delivery mechanism (HTTPS) and the people that issue them their SSL certificates (as we know this can be circumvented by motivated governments).
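As a sketch of the domain-based idea: the owner of a domain publishes their ASCII-armored public key at a fixed HTTPS path, and anyone can derive that URL from the email address alone. The path layout below is made up for illustration, not any standard.

```python
# Hypothetical email -> HTTPS key-lookup. Trust is anchored in the
# domain's TLS certificate rather than a keyserver or web of trust.

def key_url(email: str) -> str:
    """Map user@example.com to a URL on a domain the recipient controls."""
    local, _, domain = email.partition("@")
    # Assumed layout: https://<domain>/keys/<local-part>.asc
    return f"https://{domain}/keys/{local}.asc"

print(key_url("alice@example.com"))  # https://example.com/keys/alice.asc
```

From there, fetching the key is an ordinary HTTPS GET, which inherits both the convenience and the weaknesses (CA compromise) mentioned above.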
I understand why GMail wouldn't want to support PGP. They read your emails to target ads at you. Without that there would be no GMail. If you encrypt everything you send/receive and GMail cannot read it, then they have no way to monetize it.
1) Security of email messages in transit, assurance that you're receiving emails from the person who claims to be sending them, etc.
2) Preventing your email service provider (and any MITM) from reading your emails.
These concerns are relatively independent of each other. While if you're a die-hard PGP advocate you'll want both 1 and 2, PGP-in-Gmail gives us 1, and that's a pretty great start.
Right now we have neither, and we're all much poorer for it.
PGP-in-Gmail instantly gives it to everyone that has an @gmail account (and anyone else who has signed up to a WoT). It would probably insist that you use two-factor authentication. And it would work like this:
0) Every message you send is automatically signed by you by default.
1) type in >=1 email addresses in To: bar
2) If all of the email addresses you typed have public keys associated with them that Gmail can locate, they all show "Green" (or whatever) and the [Send] button becomes [Send securely].
3) If any of the addresses doesn't have a public key associated with it, then nothing is encrypted (i.e., exactly the behavior we have now).
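The flow above is simple enough to sketch. Here the key directory is just a dict standing in for whatever lookup service the provider would actually consult:

```python
# Sketch of steps 1-3: encrypt iff every recipient has a known public key.
# KEYDIR is a stand-in for the provider's key directory (an assumption,
# not a real Gmail API).

KEYDIR = {"alice@gmail.com": "KEY_A", "bob@gmail.com": "KEY_B"}

def compose(recipients):
    """Return (button_label, keys_to_encrypt_to) for a draft message."""
    keys = [KEYDIR.get(r) for r in recipients]
    if keys and all(keys):
        return "Send securely", keys   # every address is "green"
    return "Send", None                # fall back to today's behavior

print(compose(["alice@gmail.com", "bob@gmail.com"]))
print(compose(["alice@gmail.com", "carol@example.com"]))
```

The all-or-nothing rule in step 3 is what keeps the UI honest: a message is either encrypted to every recipient or to none of them.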
If you don't trust Gmail, you shouldn't trust it any less if/when they deploy PGP for it. And no one is saying you have to use it. All I'm saying is that Gmail, Hotmail, etc. seem to be the best equipped to trigger the widespread adoption of PGP.
The problem here might be that people (including Google, I guess) don't want users to trust anything MORE THAN THEY SHOULD, which is a major risk in a case like this. Sometimes security features can be counterproductive, since they can lead users to make bad assumptions and therefore bad decisions they otherwise wouldn't have made. PGP in webmail implemented purely in JS is likely one of those things that could make matters worse because of how users would treat it.
There is the PGP global directory, which does essentially that (i.e., given an email address, provides a key, with the address verified by sending an email there... which is as good as what Google would be able to provide).
Doesn't seem to get much more use either way: I don't think the problem with driving up PGP adoption is the distribution of public keys.
If you're near a university or work somewhere in tech you'll probably be able to find a DD within a degree or two in your network.
To what end though? What do you share in common with them other than the fact that you're both probably interested in cryptography? Just because you can easily communicate back and forth with encrypted messages doesn't mean you'll actually have much to talk about.
Like, signatures on my key for "controls the hn:rdl account", "controls firstname.lastname@example.org email address" etc. With dates, so I can accumulate multiple signatures over time.
I'd trust a key from someone with 14 years of "controlled ... email address" signatures on it more than someone showing me a plastic ID card in a bar.
I get a couple requests a year when someone comes through town. It could do with more participation, though. :-)
> 3) Assume that while your computer can be compromised, it would take work and risk on the part of the NSA – so it probably isn't. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it's pretty good.
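The USB-stick round trip he describes maps onto a couple of GnuPG invocations. This is a sketch, not his actual setup: the key ID and paths are placeholders, and the copy step is however your OS mounts the stick.

```sh
# On the air-gapped machine: encrypt before the file ever touches
# removable media (YOUR_KEY_ID is a placeholder).
gpg --encrypt --recipient YOUR_KEY_ID secret-notes.txt
cp secret-notes.txt.gpg /media/usb/

# On the internet-connected machine only ciphertext is ever handled.
# To read something received, carry it back and reverse the process:
gpg --decrypt --output secret-notes.txt /media/usb/secret-notes.txt.gpg
```

The private key never leaves the air-gapped box, so a compromise of the internet machine yields only ciphertext.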
I've got the seeds of an idea which has been kicking around my head for a few weeks now: a Raspberry Pi (or similar) with GPG installed on it, connected to my main computer as a USB device (possibly impersonating a USB keyboard). The Pi could be sent encrypted data over the USB/serial connection and send back the plaintext. The Pi would have no network connection, reducing the attack surface for someone trying to extract my private key remotely to some exploit that'd work over a tightly constrained serial connection. Sort of like an RSA SecurID on steroids: here's a device with a "cryptographically secure secret", but instead of just displaying TOTP tokens, you can feed it encrypted data and have it send back cleartext (optionally with a keypad and PIN/passcode required, but it's not "secure" against physical access, so I'm probably not going to try and implement that...).
Support is already integrated with GnuPG, they are specifically designed to prevent key material leaking, and they have some other nice properties (like self-destruction after three incorrect admin PIN attempts).
Smartcards are cheap, anyway: http://shop.kernelconcepts.de/product_info.php?cPath=1_26&pr...
You could get all Stuxnet and exploit the various applications (such as the components that inspect zipped content), but the transport itself is a simple file copy over a bitstream. You could do the same thing with kermit and uuencode a bit more easily.
I've heard of using serial lines/modems with the appropriate tx->rx cut, but I don't know if it would actually work for ethernet (maybe 10BaseT only?)
Half-duplex, 100mbit, ignore link. I suppose it can be done?....
The tricky bit is getting your friends to sign your new one; you'll have to re-do all of that work:
* Generate the new key
* Sign the new key with the old key
* Generate the revocation cert for the old key
* Push the revocation publicly with a reason of "Superseded by (fingerprint of new key)" or similar
* Push the new key
* Try to get your new key signed by everyone that signed your old key for authenticity's sake
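With GnuPG, the list above looks roughly like this (key IDs are placeholders, and exact prompts vary by version):

```sh
gpg --gen-key                                    # generate the new key
gpg --default-key OLDKEYID --sign-key NEWKEYID   # sign new key with old
gpg --output revoke.asc --gen-revoke OLDKEYID    # revocation cert; pick
                                                 # "key is superseded"
gpg --import revoke.asc                          # apply revocation locally
gpg --send-keys OLDKEYID NEWKEYID                # push revocation + new key
# ...and then collect fresh signatures from everyone who signed the old key.
```

The last step is the only one that can't be scripted, which is exactly the point being made above.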
This is a place where solid infrastructure support at the OS-level would really help. Instead, it's nerds and the paranoid who bother, and they'll steal the software or insist on open source anyway. :-)
What no one seems to support is ECC, but until I get more confirmation, I'm inclined to follow Schneier and stick to 4096 RSA in preference to ECC, at least where possible.
I really don't want my key to be on my general-purpose machine. I have fully airgapped machines for code signing and such, but for regular communications, I want a key which can be used from my laptops (mac, ubuntu) and ideally from iOS/Android (via bluetooth), without being exposed to compromise on the host. You can force me to sign or decrypt arbitrary stuff while my key is paired, but can't steal my key.
Ideally v2 would have some kind of log of all transactions on the secure token, so I can at least see if I'm being trolled for signatures/decrypts by malware on the host, and in v3, some kind of secure UI on the device.
I figure I could make a device like this, in a fully open/auditable design, for <$250/unit. Ideally something which looks like the Blackberry CAC reader, but uses bt 4.0 le to talk to a paired host, rather than power-hungry bluetooth.
I can confirm the German Privacy Foundation PGP stick does NOT support ECC.
You might talk to the GPF, they are currently working on their next version of their cryptostick, and it's supposed to have encrypted USB storage and the ability to support applets.
The only reasons NOT to use them: mobile end-clients, and busy servers that would die from the extra CPU burden. But other than that, WHY NOT?
Does the computing cost to encrypt/decrypt make this impractical?
I suspect that gpg partly doesn't offer insanely large key sizes because then people like me will naively use them even if we don't need them. And perhaps partly because dealing with the math for such large numbers is harder to implement. I'd rather it offered much larger keys even if they came with warnings that it might make operations slow.
The reason is actually quite simple. As far as I understand, the bignum libraries store the large numbers as an array of "limbs". Doing a bignum operation requires the library to iterate through the array, one limb at a time. The operations required for a single RSA calculation are effectively "run every limb in array A against every limb in array B". So you have a nested for-loop over N elements with no possibility of early termination.
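A toy illustration of that quadratic limb loop, using 8-bit "limbs" for readability (real libraries use machine-word limbs and much cleverer algorithms than schoolbook multiplication):

```python
# Schoolbook bignum multiply: every limb of A against every limb of B,
# with no early exit, so cost grows quadratically with the limb count.

BASE = 256  # toy 8-bit limbs; GMP and friends use 32/64-bit limbs

def to_limbs(n):
    """Split a non-negative int into limbs, least-significant first."""
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def multiply(a, b):
    la, lb = to_limbs(a), to_limbs(b)
    out = [0] * (len(la) + len(lb))
    for i, x in enumerate(la):       # the nested loop described above:
        for j, y in enumerate(lb):   # every limb of A times every limb of B
            out[i + j] += x * y
    # Recombine partial products (Python ints absorb the carries for us).
    return sum(limb * BASE ** k for k, limb in enumerate(out))

print(multiply(12345, 67890))  # 838102050
```

Doubling the key size doubles the limb count, which alone quadruples the work of each multiplication; the modular exponentiation wrapped around it adds further cost on top.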
As for the numbers from the article: my old 400MHz box spent 20ms signing or encrypting a block of data with 1024-bit RSA key. The same operations took 80ms with a 2048-bit key.
1024-bit RSA keys can be factored with conventional non-specialized hardware (read: CPUs, not even GPUs) with GNFS.
IMHO, 2048-bit RSA keys can be factored by custom hardware that the NSA has developed. I posted my reasoning for this hypothesis in other Hacker News threads. A very quick/terse rundown of the main points: 1) The NSA is known to use custom hardware (they have their own chip fabs); you can extrapolate the performance gain from things like GPUs, FPGAs, and Deep Crack. 2) Al Qaeda uses 2048-bit RSA for internal communications. 3) Most corps, diplomats, criminals, and normal people use 2048-bit RSA either directly (SSH keys, website certs, VPNs) or indirectly (CAs still use 2048-bit RSA certs valid until 2020).
> 3) Most corps, diplomats, criminals, and normal people use 2048-bit RSA either directly (SSH keys, website certs, VPNs) or indirectly (CAs still use 2048-bit RSA certs valid until 2020).
I don't see how this is evidence that NSA has the ability to compromise 2048 bit keys, at will. Only that they very likely desire that ability. Math doesn't respond to desire.
That's not to say I believe they don't. Just that I can't accept two of your three premises for why one should believe they do.
Any reason for that?
[Almost] all of my SSH and TLS (be it HTTPS or OpenVPN) keys are 4096 bits long.
I wasn't worried about TLAs with supercomputers snooping on my wires; I'd just heard that 4096-bit RSA keys are considered more secure than 2048-bit ones while not sacrificing much performance, so I had no reason to specify a lower size.
Now, you kids these days with your entropy pools and PRNGs in your CPUs because you had an empty spot on the tape-out...get off my lawn!
Any reason for that?
This isn't directed at you, but I wish people would stop talking about how strong crypto is if they haven't written software to break it, don't understand the mathematics, and don't understand hardware design. I just facepalm and shake my head when people post publicly that you'd have to boil the oceans to factor a 1024bit number (break a 1024bit RSA openPGP key).
To put it in simple terms, doing operations with 32-bit numbers and with 64-bit numbers has a calculable complexity gap. But that gap means very different things in terms of difficulty on an 8-bit microprocessor vs. a 32-bit microprocessor vs. a 64-bit microprocessor.
He has worked previously in mostly corporate and private contexts, so 2048 is just fine there. Now he works with people and data the NSA wants its hands on, and he wants the data to remain secure into the future. It's only reasonable to move to 4096-bit key sizes.
>Dr Lenstra and Dr Verheul offer their recommendations for keylengths. In their calculation, a 2048 bit key should keep your secrets safe at least until 2020 against very highly funded and knowledgeable adversaries (i.e. you have the NSA working against you). Against lesser adversaries such as mere multinationals your secret should be safe against bruteforce cryptoanalysis much longer, even with 1024 bit keys.
See also: http://www.keylength.com
He is using Windows.