What do you (and pbsd) think of the site's recommendation to use custom 2048-bit parameters as opposed to a well-known 2048-bit group such as group 14 from RFC 3526? Is it really that likely a nation-level adversary could break 2048-bit FFDHE the same way they've probably broken the 1024-bit group 2? How does that weigh against the risk of implementation errors when generating your own parameters, or the risk of choosing a group with a currently-unknown weakness?

Group 14 is fine. You might as well also tell people to use custom block ciphers, since a precomputation of roughly the same magnitude---encrypt a common plaintext under many possible keys---would break AES-128 pretty quickly as well.

I would say use a custom group if you must stick with 1024-bit groups for some reason. Otherwise, use a vetted group of 2048 bits or more. If---or when---2048-bit discrete logs over a prime field are broken (and by broken I mean once, regardless of precomputation), it will likely be due to some algorithmic advance, in which case DHE is essentially done for. If nation states have been able to pull that off already, then it's pointless to even recommend anything related to DHE in the first place.


Seconded. Group 14 (2048-bit, ≈112-bit workfactor) or another safe 2048-bit or greater prime (such as ffdhe2048, or ffdhe3072 @ ≈128-bit workfactor) will do fine for now. You don't need to roll your own safe primes. As per the paper: "When primes are of sufficient strength, there seems to be no disadvantage to reusing them."

The problem with reusing them is of course when they're not strong enough, and so if an adversary can pop one, they can get a lot of traffic - and as I've said for a while and as the paper makes clear, 1024-bit and below are definitely not strong enough. Anything below 2048-bit would be a bit suspect at this point (which is precisely why the TLS Working Group rejected including any primes in the ffdhe draft smaller than that - even though a couple of people were arguing for them!).

If you're still needing to use 1024-bit DH, DSA or RSA for anything at all, and you can't use larger or switch to ECC for any reason, I feel you have a Big Problem looming you need to get to fixing. Custom DH groups will not buy you long enough time to ignore it - get a plan in place to replace it now. We thought 1024-bit was erring on the small side in the 1990s!

I concur that the NSA's attack on VPN looks like an operational finite-field DH break - I didn't realise that two-thirds of IKE out there would still negotiate Oakley 1 (768) and 2 (1024), but I suppose I didn't account for IKE hardware! Ouch!

Their attacks on TLS, though also passive, are architected far more simply and are more suggestive of an RC4 break to me, as there seems to be no backend HPC needed - ciphertext goes in, plaintext comes out. Both are realistic attacks, I feel, but RC4 would have been far more common in TLS at the time than 1024-bit DHE, and although 1024-bit RSA would be present, many likely sites would have been using 2048-bit, so naturally they'd go for the easiest attack available. (That gives us a loose upper bound for how hard it is to break RC4: easier than this!) I also don't think the CRYPTO group at GCHQ would have described this as a "surprising […] cryptologic advance" from NSA, but just an (entirely predictable) computational advance, and (again) lots of people in practice relying on crypto that really should have been phased out at least a decade ago. So there's probably more to come on that front.

Best current practice: Forget DHE, use ECDHE with secp256r1 instead (≈128-bit workfactor, much faster, no index calculus). You can probably do that today with just about everything (except perhaps Java). It will be faster, and safer. And, we know of nothing wrong with NIST P-256 at this point, despite its murky origins.

Looking forward, Curve25519 (≈128-bit) and Ed448-Goldilocks (≈222-bit) are, of course, even better still as the algorithms are more foolproof and they are "rigid" with no doubts about where they come from (and in Curve25519's case, it's even faster still). CFRG is working on recommending those for TLS and wider standardisation. You can use 25519 in the latest versions of OpenSSH right now, and you should if you can.
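
Concretely, for anyone wanting to act on this: in nginx the curve is a one-line directive, and recent OpenSSH can be pointed at Curve25519 explicitly. (A minimal sketch; the directive and kex names are real, the host is a placeholder.)

  # nginx: use P-256 for ECDHE ("prime256v1" is OpenSSL's name for secp256r1)
  ssl_ecdh_curve prime256v1;

  # OpenSSH 6.5+: prefer the Curve25519 key exchange
  ssh -o KexAlgorithms=curve25519-sha256@libssh.org user@example.com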


They collected and stored data unnecessarily, so former management absolutely shares blame.

No, because in elliptic curve Diffie-Hellman, the private key isn't used to invert anything, as opposed to RSA (a true example of a trapdoor), where it is.

Well... it's used to invert the scalar multiplication and compute the discrete logarithm - with a notation similar to other comments:

  Easy: given int n, point P -> compute Q = nP

  Hard: given points P, Q (known to be nP for some n) -> compute n
That said, just as with RSA vs. factorization, the DHP and the DLP (and other related problems) are only assumed to be equivalent, meaning one could find an easy way to break DH without ever computing a discrete log.
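
A toy sketch of that asymmetry in Python, using a multiplicative group mod p rather than an elliptic curve (the easy/hard shape is the same; the parameters here are illustrative and far too small to be secure):

  # Toy discrete-log demo over Z_p* (insecure toy parameters, illustration only)
  p = 2**127 - 1        # a prime modulus
  g = 3                 # a fixed base
  n = 54321             # the secret exponent

  Q = pow(g, n, p)      # easy direction: modular exponentiation is fast

  # Hard direction: recovering n from (g, Q, p) is the DLP; the brute force
  # below only terminates quickly because n is tiny
  m, acc = 0, 1
  while acc != Q:
      acc = acc * g % p
      m += 1
  assert m == n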

While the equivalence between RSA and integer factorization is still an open question, the Rabin (exponent 2) trapdoor permutation is tightly equivalent to factoring.

Furthermore, for most groups the DHP is polynomially equivalent to the DLP. The requirement for this to be true is that there exists an elliptic curve with smooth order modulo the Diffie-Hellman group's order. Such smooth-order curves are hard to actually find for large groups, exponentially so (this is a fine example of the chasm between uniform and nonuniform reductions); but for elliptic curve groups used in practice, it is possible to find them. In other words, an easy way to break the DHP in smallish elliptic curve groups would lead to ECDLP solving with only polynomial overhead.


Thanks for this. I was unclear on this point. So looking at the discrete log problem vs a trapdoor function:

    Discrete log: for f(x) = y
     - Easy: given f and x find y.
     - Hard: given f and y find x.

    Trapdoor: for f(x) = y
     - Easy: given f and y find x, given a secret, e.g. (p-1)(q-1) in RSA.
     - Hard: given f and y find x, without possession of the secret.
Is that accurate, or have I misstated the essential difference somehow?

Yes, that's essentially the difference between a one-way function and a trapdoor function.

I'm sure you knew this already, but for completeness, another property of the trapdoor function is:

  - Easy: given f and x find y.
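
To make the contrast concrete, here's a toy RSA sketch in Python (textbook parameters, no padding; purely illustrative):

  # Toy RSA trapdoor demo: knowing phi = (p-1)(q-1) is the trapdoor secret
  p, q = 61, 53
  n = p * q                  # public modulus
  e = 17                     # public exponent
  phi = (p - 1) * (q - 1)    # the secret
  d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

  x = 42
  y = pow(x, e, n)           # easy: anyone can compute y = f(x)

  assert pow(y, d, n) == x   # easy with the secret: invert via d
  # Without phi (i.e. without factoring n), recovering x from y is hard.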

When I try to import HPA's key from the public key servers, I get an "invalid subkey binding" error and the weak subkey isn't imported. That error means that the subkey isn't properly signed by HPA's master key, so there is no cryptographic proof that this weak subkey actually belongs to HPA. This looks more like a fake subkey that someone tried to pollute the public key servers with, which isn't really an issue because PGP implementations will just ignore it.

  gpg --verbose --keyserver hkp://hkps.pool.sks-keyservers.net --recv-key 0xbda06085493bace4
  gpg: requesting key 0xBDA06085493BACE4 from hkp server hkps.pool.sks-keyservers.net
  gpg: armor header: Version: SKS 1.1.5
  gpg: armor header: Comment: Hostname: keyserver.witopia.net
  gpg: pub  4096R/0xBDA06085493BACE4 2011-09-22  H. Peter Anvin <hpa@infradead.org>
  gpg: key 0xBDA06085493BACE4: invalid subkey binding
  gpg: key 0xBDA06085493BACE4: skipped subkey
  gpg: key 0xBDA06085493BACE4: "H. Peter Anvin (hpa) <hpa@zytor.com>" not changed
  gpg: Total number processed: 1
  gpg:              unchanged: 1

I think you may have solved the mystery, including my confusion about why I couldn't get the vulnerable subkey from the keyservers. My gpg was silently discarding the vulnerable subkey because it doesn't have a proper signature.

If this is the explanation, then this is either an attack by a random person, or an attack on (or flaw in) a keyserver - but an attack that's unlikely to work, because users will discard the bad key rather than using it.


The keyservers aren't secure anyway. They are more like a big public wall on which anybody can write anything.

The users are the ones responsible for any key verification.


Yes! It looks like someone inserted a broken subkey with an invalid signature into the keyserver. If your software didn't validate subkey signatures, you could very well think that a package was signed by HPA. Alternatively, it could be that someone was just fucking around and uploaded a subkey with an invalid signature for the lolz.

Here's a json export of the packets: https://gist.github.com/anonymous/ba23ca66d2ca249e6f84

Here's the factored subkey: https://gist.github.com/anonymous/ba23ca66d2ca249e6f84#file-...

Here's the factored subkey's bad signature: https://gist.github.com/anonymous/ba23ca66d2ca249e6f84#file-...

EDIT: It's the EXACT SAME subkey self-signature packet as HPA's real subkey self-signature packet! Someone (by malice or mistake) manually added a subkey to HPA's public key and copied the signature from the other subkey directly onto the new subkey.

These are the same:

Bad subkey self-signature: https://gist.github.com/anonymous/ba23ca66d2ca249e6f84#file-...

Good subkey self-signature: https://gist.github.com/anonymous/ba23ca66d2ca249e6f84#file-...


That JSON export is cool - how did you generate it?

One of my side projects: https://github.com/diafygi/openpgp-python

Not fully functional yet, but it was able to convert this public key to json. I manually removed the non-self-signature packets for the gist.


    This looks more like a fake subkey that someone tried to pollute the public key servers with
Does anybody know how that would be possible? I can't understand why a key server would accept a subkey unless it was correctly signed by the primary key. At the moment, all I can think of is:

1. Misbehavior on the part of someone running a keyserver.

2. A bug in the keyserver software.

    which isn't really an issue because PGP implementations will just ignore it
Has that always been the case? With all widely used PGP implementations?

I ask both of the questions because I can't understand why anybody would go to the trouble of doing this unless it supports some kind of attack (which may no longer be viable, but perhaps was at some time in the past).


The SKS keyserver pool (which keys.gnupg.net aliases to) doesn't do any cryptographic verification on upload, not even of self-signatures. The software just checks that the format is valid.

It's up to the clients to do their own verification, which in this case GPG does perfectly (it doesn't import the invalid subkey since the self-signature is invalid).


If that's true, it's a nice DoS vector.

FWIW, this is currently being discussed on the sks-devel mailing list, but the overwhelming opinion seems to be that the current behaviour should stay. https://lists.nongnu.org/archive/html/sks-devel/2015-05/msg0...

There are still many clients out there that do not support AE ciphersuites (e.g. literally every version of Safari). If SSL Labs capped the grade at B for supporting these clients, then the top grade at SSL Labs would effectively become a B, as no serious site would get an A grade anymore. It wouldn't actually accelerate the transition away from these clients/ciphers.

Making the effective top grade a B might also reduce the psychological motivation for improving a seriously bad TLS config. At least in the US, anything less than an A is considered very close to failing these days (it's silly, but it's the reality). Making changes just to get a B isn't nearly as motivational as making changes to get an A.


Those older clients are actually vulnerable to real cryptographic flaws in TLS.

Which flaw, specifically, would the latest version of Safari be vulnerable to?

I guess it depends on your confidence in the Lucky 13 patch, right?

Indeed.

And to be clear, I'm not saying we shouldn't be moving away from the CBC ciphers as fast as possible - just that dishing out a B grade for supporting a large number of users is not a good way of accomplishing that.


You and tptacek are both right, so how about the following to combine your positions:

* Grade 'A' requires AES-GCM and ChaCha20Poly1305
* Allow CBC for 'A' iff it's a last-resort option
* Use the nginx equivalent of "ssl_prefer_server_ciphers on;"

CBC would then only be used by clients that don't support AE ciphersuites.
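
In nginx terms, a sketch of that policy (the directives are real; the cipher string is an illustrative example, not a vetted list):

  # Prefer AEAD suites; CBC suites trail the list as a last resort
  ssl_prefer_server_ciphers on;
  ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA";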


Indeed. Debian is one OS which is affected by this, because they shipped an outdated version of NSS (the crypto library used by Chrome) which does the suboptimal path generation.

Yesterday, the latest version of NSS was finally uploaded to Debian Unstable, fixing the problem there, but Debian Stable is still affected, and will be until it's updated either through a security update or a stable point release. I plan to agitate for this if necessary.

Here's the relevant Debian bug report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774195


> There are no known security vulnerabilities with HTTP/1.x's style of compression. That's because HTTP/1.x only supports compressing response bodies.

You are completely wrong. The BREACH attack showed that you can be vulnerable to a compression oracle by merely compressing response bodies: http://breachattack.com/

> For the love of god, keep using compression with your websites, whether HTTP/1.x, TLS + HTTP/1.x, or HTTP/2

You can continue compressing files such as CSS that don't reflect any attacker-controlled content, but compression of dynamic HTML pages is likely to be insecure.


It's quite depressing that we can't have both compression and encryption at the same time - text is very compressible - particularly HTML markup...

Theoretically, you can, as long as any secrets within the response body are represented as a "literal" (that is, uncompressed) within the compressed stream.

In practice, there isn't a way I know of to mark the relevant parts of the response body so that the compression, which is usually done as a separate post-processing step, will know to avoid compressing these parts of the stream. And it would be very fragile; forgetting to mark a secret as "do not compress this" would work perfectly fine but reintroduce the vulnerability.


Simple enough in HTTP/2: lift the relevant parts out of the response body—turn them into separate resources and refer to them in the original object by their URL. When the client then requests those dependent objects, deliver them uncompressed. (And server-hint/push the dependent objects if you can, of course.)

Could the order be reversed, encrypting first then compressing second?

So only compress when sending over the wire?


You could, but you wouldn't compress much.

Encrypted data is supposed to look random, and as a consequence, it tends to compress very poorly.
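
Easy to see for yourself (a quick sketch with Python's zlib standing in for the transport-level compressor; exact sizes will vary):

  import os, zlib

  html = b"<div class='item'>hello</div>" * 200
  noise = os.urandom(len(html))   # stands in for ciphertext

  print(len(html), len(zlib.compress(html)))    # shrinks to a small fraction
  print(len(noise), len(zlib.compress(noise)))  # comes out slightly LARGER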


Interesting. I was afraid that'd be the answer. Thanks for the reply.

You can with HTTP/2.

so I have gzip compression specified in my nginx config and I use SSL for everything.

Is my server vulnerable to this?


Your app is vulnerable if it includes content from the user (e.g. a GET query parameter or something from a POST request body) in the response, and includes secret info (e.g. an anti-CSRF token) in that same response.

Wow, that's scary. I imagine quite a lot of people are unaware of this.

I personally use SSL on my personal site just because I like the idea that readers can be sure the content is what I've sent, rather than because the information is private / sensitive. So I'm not personally concerned because this doesn't allow anyone to MITM the connection, just read it.


I suppose that adding a small random bit into the page could defeat this known-plaintext attack?

Not really. The theory behind the attack is that if the user-supplied content matches the secret content, the response will compress more effectively and have a smaller content length. The other content on the page doesn't really matter.

Adding data of random length will make the attack more difficult, but won't defeat it entirely. This is called length hiding.

See http://breachattack.com/ for a more thorough explanation by smarter people.
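
The core of the oracle fits in a few lines (a minimal sketch with Python's zlib and a hypothetical token; a real BREACH attack recovers the secret byte-by-byte from observed response lengths, averaging away noise, but the signal is the same):

  import zlib

  SECRET = "csrf=8f41cc1e"   # hypothetical token reflected in every response

  def response_length(user_input):
      # The attacker observes only the length of the compressed response
      page = "<html>" + user_input + " ... " + SECRET + "</html>"
      return len(zlib.compress(page.encode()))

  # Guesses sharing a longer prefix with the secret tend to compress better
  for guess in ["csrf=0", "csrf=8", "csrf=8f"]:
      print(guess, response_length(guess))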


The attack depends on the secret matching a user-provided string closely enough for the compression algorithm to notice the redundancy?

A counter-measure then would be to scramble the user input in the page with a random key, and include this key in the page for de-scrambling using JavaScript. It will still efficiently compress the parts of the page which are not user-controlled.


Yes and Yes (assuming your random key isn't guessable).

breachattack.com suggests masking the secret with a per-request random key, but masking the user input would work too.
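
A sketch of the secret-masking variant (Python, hypothetical page layout; masking the user input instead works the same way, and real implementations do this in the template layer):

  import os, zlib

  SECRET = b"csrf=8f41cc1e"   # hypothetical token

  def masked_page(user_input):
      mask = os.urandom(len(SECRET))   # fresh random key per response
      masked = bytes(a ^ b for a, b in zip(SECRET, mask))
      # Page carries mask + masked token; the client XORs them back together
      return (b"<html>" + user_input + b" ... " +
              mask.hex().encode() + masked.hex().encode() + b"</html>")

  # The bytes encoding the secret now differ on every response, so an
  # attacker's guess can no longer produce a cross-match for the compressor.
  print(len(zlib.compress(masked_page(b"csrf=8f"))))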


> de-scrambling using JavaScript

No thank you.


Ahhhh! Thank you, I had completely forgotten about BREACH and was fixated on CRIME.

Indeed, and they have been a source of some nasty security vulnerabilities in the past. If you're using libcurl, it's a very good idea to restrict redirects to HTTP and HTTPS only (with CURLOPT_REDIR_PROTOCOLS), lest a malicious site redirect you to an imap or smtp URL.
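
With pycurl (the Python binding), that looks roughly like this (the option and protocol constants are real libcurl/pycurl names; the URL is a placeholder):

  import pycurl

  c = pycurl.Curl()
  c.setopt(pycurl.URL, "https://example.com/")
  c.setopt(pycurl.FOLLOWLOCATION, True)
  # Only follow redirects to HTTP/HTTPS; never imap://, smtp://, etc.
  c.setopt(pycurl.REDIR_PROTOCOLS, pycurl.PROTO_HTTP | pycurl.PROTO_HTTPS)
  c.perform()
  c.close()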



There are significant downsides to technically prohibiting caller ID spoofing. First, you would no longer be able to preserve caller ID when forwarding calls. Consider how much less useful Google Voice would have been if you couldn't see the caller ID of calls forwarded to you. Second, in VoIP, outgoing calls (termination) and incoming calls (origination) are completely decoupled services. This is really nice because it enhances reliability (you can fail over to different termination provider) and reduces costs (you can route outgoing calls to the cheapest provider depending on the destination). A BCP38-style system would require you to purchase your termination from the same provider as your origination in order to have outgoing caller ID.



I think the idea is that caller ID forwarding should be implemented in some authenticated or verifiable way. I hesitate to call credential forwarding a solved problem, but surely a modern protocol design could do a lot better than the current free-for-all, I-am-who-I-say-I-am system.



Maybe something like DMARC but for phone numbers: you should be able to set your phone number to be rejected if it is spoofed.



As a Google Voice user, the downside would be sad but totally worth it if it meant I could have GV accurately screen my calls. I could still get full caller ID information for GV calls received through VoIP, and when receiving calls through POTS I could at least know that the call is from somebody I'm willing to take calls from.

The other use case applies pretty much only to people I don't want to hear from in the first place, so I have no sympathy for how it would hurt their bottom line.



It sounds like the list of people you're willing to receive calls from is pretty small. Have you ever actually received a phone call which spoofed the caller ID of someone you know? I haven't - all the spoofed calls I receive are from numbers I usually wouldn't answer anyways since I don't recognize them.

Also, note that Google Voice no doubt uses many termination providers (considering they're giving the service away for free, they'll need some pretty sophisticated routing to take advantage of cheap rates), so the second use case applies to any Google Voice user.



Why can't we have a system whereby Google is required to prove to each of their termination providers that they legitimately own and control my GV number, and thus all of those providers can assure my outgoing GV calls have valid caller ID data? I don't see where it should matter whether calls into my number take a potentially different route from calls from my number.



> Yes, they should be free to express their opinions. But free speech doesn't mean freedom from consequences.

That's bad analysis. Free speech means freedom from government-imposed consequences, and since their employer is the government, being fired would be a government-imposed consequence. (For the same reason, public universities cannot discipline students for their protected speech.) Rather, the correct reason these cops should be fired is because they legally do not have the right of free speech when in uniform - see my other comment up-thread.



Fair enough, my analysis wasn't sufficiently specific. I agree that gov't employees should be free from government-imposed consequences if they are criticizing policy. In contrast, directly disregarding one's boss while working in an official capacity should not necessarily be free from government-imposed consequences (i.e., that should be a sanctionable offense). Obviously there's a dividing line between the two, so it depends where that line is drawn (and where one places the actions of those officers on that spectrum).

