Nginx doesn't suck at SSL after all (matt.io)
288 points by seiji on July 13, 2011 | 107 comments



In case you're wondering what "Perfect Forward Secrecy" is: SSL/TLS, like most protocols, uses (expensive, dangerous) RSA to exchange (cheap, simple) AES or RC4 session keys; the bulk data is encrypted with the session key.

In the normal protocol, if you lose the RSA key, an attacker can retroactively decrypt the session keys, which are protected only by that same RSA key.

In ephemeral DH mode, instead of encrypting a session key with RSA, both sides run the Diffie Hellman protocol to exchange a key†. DH allows two unrelated parties who share no secrets to exchange a secret in public; it's kind of magical. But it's also trivial to man-in-the-middle. To get around that problem, ephemeral Diffie Hellman mode in SSL/TLS signs the DH exchange with the RSA key.

The win here is that losing the RSA key now only allows you to MITM future SSL/TLS connections. This is still a disaster, but it does not allow you to retroactively unwind previous DH exchanges and decrypt earlier captured sessions.

DH is unbelievably simple; go read the Wikipedia page.
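
If it helps, here's a toy version with tiny made-up numbers (real DH uses primes of 1024+ bits, and in TLS the exchange is additionally signed with the RSA key, as described above):

    /* toy Diffie-Hellman with tiny made-up numbers; illustrative only */
    #include <stdio.h>
    #include <stdint.h>

    /* modular exponentiation: (base^exp) mod m */
    static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m) {
        uint64_t result = 1;
        base %= m;
        while (exp > 0) {
            if (exp & 1)
                result = (result * base) % m;
            base = (base * base) % m;
            exp >>= 1;
        }
        return result;
    }

    int main(void) {
        uint64_t p = 23, g = 5;       /* public parameters (tiny, for illustration) */
        uint64_t a = 6, b = 15;       /* each side's private exponent */
        uint64_t A = powmod(g, a, p); /* Alice sends A = g^a mod p */
        uint64_t B = powmod(g, b, p); /* Bob sends   B = g^b mod p */
        printf("Alice computes %llu, Bob computes %llu\n",
               (unsigned long long)powmod(B, a, p),   /* B^a = g^(ab) mod p */
               (unsigned long long)powmod(A, b, p));  /* A^b = g^(ab) mod p */
        return 0;
    }
Both sides end up with the same shared value (2, with these numbers) even though only p, g, g^a, and g^b ever went over the wire.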


Diffie-Hellman and Boyer-Moore are two algos that completely blew me away with their simplicity when I first encountered some theory on them.

http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exch...

http://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_sear...


It could be a really nice blog post. A lot of people get lost on Wikipedia pages like that.


"Ephemeral Diffie Hellman"

It sounds magical too.


From the article, to find out what your website is doing:

openssl s_client -host HOSTNAME -port 443

I ran this for my own website and a few bigger websites

  openssl s_client -host www.gusta.com -port 443 (My site, hosted on Heroku)
  Cipher    : DHE-RSA-AES256-SHA
  
  openssl s_client -host www.google.com -port 443  
  Cipher    : RC4-SHA 
  
  openssl s_client -host www.airbnb.com -port 443  
  Cipher    : AES256-SHA
  
  openssl s_client -host www.facebook.com -port 443
  Cipher    : RC4-MD5
  
  openssl s_client -host www.paypal.com -port 443
  Cipher    : AES256-SHA

  openssl s_client -host www.amazon.com -port 443
  Cipher    : RC4-MD5


Presumably Amazon, Facebook and Google are using RC4 for speed reasons, though it's not really thought to be secure anymore.


RC4 is fine in SSL/TLS.

Nobody likes it, but until relatively recently, the AES ciphersuites were all CBC mode, which means they burned a couple bytes of padding for every record.

RC4 is also faster than AES, which was, until very recently, an issue for server performance.

We have AES-GCM (counter mode) ciphersuites now, but I'm not sure how widely deployed they are.


The padding for AES cipher suites isn't a significant performance issue. The per-record IV in TLS 1.1 and later, the overhead of the tls-cbc.txt workaround for earlier versions, and/or the extra block(s) of encryption required for HMAC-SHA-2-based cipher suites are bigger performance issues.

RC4 is faster than AES on older processors. On the current generation of Intel server processors, AES is faster because of AES-NI.

AES-GCM cipher suites should be even faster yet on those processors, but they are not widely implemented. Also, I wouldn't be surprised to see more interesting papers being published about weaknesses in the GMAC function in the near future. (Note: I don't have any knowledge of upcoming papers; I am guessing there will be some based on the recent paper regarding the discovery of unexpected weak keys in GMAC.) Nonetheless, I suspect that NSS will implement them because they are part of the NSA Suite B profile for TLS and some NSS developers want a complete implementation of that.


I was under the impression that RC4-MD5 was no longer recommended but RC4-SHA was okay (comparatively). Is this incorrect?


The hash algorithm in TLS is HMAC-hash. Some uses of MD5 like secret suffix are now insecure (which is why usage of MD5 for certificate signing ended long ago), but HMAC is not one of them.
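
As a rough illustration (the key and message below are made up), computing HMAC-MD5 with OpenSSL looks like this; the point is that the MAC is keyed, unlike a bare MD5 digest of the data:

    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    int main(void) {
        const unsigned char key[] = "not-a-real-tls-mac-key";
        const unsigned char msg[] = "example TLS record payload";
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0;

        /* keyed MAC over the message; without the key you can't forge it */
        HMAC(EVP_md5(), key, (int)(sizeof(key) - 1),
             msg, sizeof(msg) - 1, mac, &mac_len);

        for (unsigned int i = 0; i < mac_len; i++)
            printf("%02x", mac[i]);
        printf("\n");
        return 0;
    }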


While I would certainly prefer to see the major players leading the way in adoption of AES by default, RC4-MD5 persists for at least 2 reasons:

1) Habit

2) As implemented/deployed in SSL, it still provides some security

RC4 has gotten a bad reputation in large part because of its poor application in WEP that resulted in keys being rapidly recovered by sniffing traffic. The Wikipedia entry is a good place to start http://en.wikipedia.org/wiki/RC4#Security (& numerous references for the original papers/pubs cracking various bits of RC4). The RSA response to RC4 concerns (from WEP) is worth reading, as well http://www.rsa.com/rsalabs/node.asp?id=2009 .


I just did a quick dig to see if browsers now allow the user to specify a preferred cipher but it looks like things have not progressed. Chrome for instance has marked a request as WONTFIX.

http://code.google.com/p/chromium/issues/detail?id=58833


RC4 has known weaknesses, but it's still extremely difficult to crack when implemented correctly. Attackers want to go for your weakest point, which is almost certainly not RC4; typically it's far more difficult to crack RC4 than to steal your data by finding a SQL injection or buffer overflow, or simply breaking into your building.


Is that so?

  openssl s_client -host online.citibank.com -port 443
  Cipher    : RC4-MD5

  openssl s_client -host www.bankofamerica.com -port 443
  Cipher    : RC4-MD5


Banks are unfortunately poster children for what not to do in this space, generally. The default cipher for google.com is RC4-SHA, and I can, if so inclined, force negotiation of AES-based ciphers by client config. Not so with Citibank (RC4-MD5, DES-CBC3-SHA, or DES-CBC-SHA) & BofA is only marginally better (RC4-MD5, RC4-SHA, AES128-SHA). To their credit, they are using 2048 bit RSA keys with short lifetimes and they have significantly improved their configurations from a couple of years ago when single DES defaults and 40-bit RC4 were all too common.
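
For example, you can force a particular suite from the client side using the same s_client invocation as earlier in the thread, plus its -cipher option:

  openssl s_client -host www.google.com -port 443 -cipher AES256-SHA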


An article about configuring SSL that doesn't 1) discuss trade-offs of security vs. resource consumption, 2) explain how to figure out your performance requirements, and 3) indicate the author really understands implications of decisions about crypto is an article you should probably disregard. Modern CPUs are so ridiculously good at crypto, and most sites have such ridiculously low connection rates, that optimizing for maximum performance at the expense of security is a fool's game in most cases. Instead, focus on measuring your real performance requirements first, and on things like sane configuration of SSL, for example by explicitly listing ciphers instead of using the impenetrable +aNULL:-yourMom syntax.
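
As a purely illustrative example of what an explicit list can look like in nginx (the specific ciphers here are a sketch, not a recommendation):

  ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA;
  ssl_prefer_server_ciphers on;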

Here's my vintage code for scanning SSL configs: https://github.com/b/tlscollect

Here are a couple of must read posts from someone who really knows his SSL business:

http://www.imperialviolet.org/2010/06/25/overclocking-ssl.ht...

http://www.imperialviolet.org/2011/02/06/stillinexpensive.ht...

It's great to learn.

Lil' B


Does any of this have anything to do with Matt's post? Adam's first post says the same thing Matt's does: DHE is expensive.

The "tradeoff" in security vs. performance you're referring to irrelevant to almost everyone building on nginx. If you've lost your RSA key, you are well and truly fucked. DHE is interesting, but sniping at people for not using it (in your case, implicitly) is unfair.


Adam's post is rather more thorough and nuanced, which makes sense since he actually understands SSL and benchmarking. While you might summarize them both as "DHE is expensive", I don't know why you would. Here is each post on DHE:

Adam - "However, with a pure RSA ciphersuite, an attacker can record traffic, crack (or steal) your private key at will and decrypt the traffic retrospectively, so consider your needs."

Matt - "Unfortunately, it also includes a very computationally intensive cipher using an ephemeral Diffie-Hellman exchange for PFS. Sounds scary already, doesn't it? ... The problem cipher is DHE-RSA-AES256-SHA [b]."

The first is factual and straightforward. The second is muddled and clearly skewed towards blindly disabling DHE. I believe we are in agreement that it is irrelevant to almost everyone building on nginx: their connection rates are so low they will not notice the overhead introduced by DHE.

I am sniping at enthusiastic ignorance and encouraging others to behave similarly. I hope that is all quite clear now.

Hugs and kisses, Lil' B


Are you a little worried that you come off sounding like "Adam is one of the cool kids and Matt isn't"? Matt's conclusion is ultimately correct.

And we apparently disagree completely about DHE, because you appear to be saying you'd recommend it to web startups, despite the fact that the bank that clears those startups' transactions isn't even using it.

Especially weird given that Boundary, your startup, doesn't do DHE.


I think benblack's argument is that Adam can recommend disabling DHE because he knows what it is and what it does and can make an informed decision about whether or not your average SSL-enabled site needs it.

Matt simply says "I messed with my settings and leaving this one out makes it faster", without knowing whether or not turning DHE off is safe (or if he does know, clearly he's making it seem like he doesn't). The fact that it is safe -- in this instance -- isn't particularly relevant. The point is that someone who doesn't understand the security implications of something is making a recommendation about security, just cloaked in a recommendation about performance.

Anyway, I don't know any of the people we're talking about here, just trying to help clear up what I believe benblack was trying to say :)


Right is right. Wrong is wrong. Pants aren't shirts. It's clear Ben doesn't think Matt is qualified to write the post. But he should have holstered the impulse to gripe about it until Matt wrote something wrong.


Well, Matt did write something wrong. The original post about nginx "sucking" at SSL was wrong. Maybe it sucks for SSL in its default configuration (is that even the case, or was Matt's config copy/pasted from elsewhere?), but saying it sucks in general is incorrect and link-baity. You can presumably configure other web servers to suck just as much at SSL by enabling DHE ciphers and providing DH params.


We're commenting on this blog post. As was Ben, who didn't comment on the previous post, but did single this one out here and, as I recall, on Twitter.


It sounds like you're implying that when someone posts something on the internet, when you're evaluating the usefulness of that information, you should ignore anything else they wrote previously. Frankly I don't care all that much about Ben's motivations behind calling out the author here, but I read both blog posts as they came out, and the lack of attention to detail in the first post definitely affected my opinion of the second post. I don't think a lack of participation in the first HN discussion means you're disqualified from participating in the second one using information from the first.


I am recommending that people who do not understand the trade-offs and do not have the traffic for it to matter should probably leave those safe defaults alone. What the banks choose to do is unfortunate, but should not dictate behavior. If all the banks chose to jump off a bridge, etc.

Recommending people unfamiliar with configuring SSL leave defaults alone is only incompatible with our having non-default config if you are implying I don't understand configuring SSL. I doubt that is what you mean, as I am ever the optimist.

Yay!,

Lil' B


Turning off DHE is safe. I assume you agree with this, because your SSL server appears unable to do DHE. But whether you agree or not, ephemeral DH is not necessary for secure SSL. As Adam Langley pointed out himself: enabling DHE without knowing what you're doing can create more security problems, because your parameters can be insecure.

I'm having trouble parsing the rest of your comment. I don't have a religious belief about what defaults are reasonable to muck with and which aren't, but: this particular one is fine to change.


Speaking of fairness, how do you conclude that Ben is sniping at people for not using DHE? I've re-read his comment multiple times, and I don't see that in there at all -- explicitly or implicitly.


The words "indicate the author really understands implications of decisions about crypto is an article" were what set me off. I think I'm right; Ben doesn't really believe you need to use DHE, but for some reason doesn't think Matt rates highly enough to write a blog post about configuring SSL.


On a related topic, these are both your comments on this post:

"The win here is that losing the RSA key now only allows you to MITM future SSL/TLS connections. This is still a disaster, but it does not allow you to retroactively unwind previous DH exchanges and decrypt earlier captured sessions."

"If you've lost your RSA key, you are well and truly fucked."

Thanks for clearing that up!

Make love not war,

Lil' B


Oh. You're forum trolling. You know what though, it's always great to see you Ben. Thanks for taking the time.


Correct, disagreeing with you implies trolling. You are one astute dude, Tom. Keep up the good work!


I think that grasping at straws implies you're trolling, not disagreeing with someone else, although more than disagreeing with Tom it just seems like you're complaining...

I agree that both articles have different degrees of technical completeness, but really getting snarky because you believe that someone doesn't have the 'stripes' to write an article that ultimately agrees with the former article you're comparing it to seems to me like a waste of time. Especially since both articles basically come to the same conclusion. It gets worse when you pull two quotes from tptacek that ultimately change neither what he said in the beginning nor the reason he's responding to your posts.


So Nginx got unwarranted hate for having the most secure defaults. That sucks. I hope the user nginxorg -- who I assume is Igor Sysoev -- who dropped by the previous thread (http://news.ycombinator.com/item?id=2752136), sees this post too.

Either way -- based on his attitude in the first post, I'm really surprised by how Matt owned up and did his homework for this one. (He should have done it from the beginning, of course, but none of the people bashing him in the previous thread actually provided anything to support what they were saying.)


It's not the "best/most secure". All the other servers can trivially enable EDH as well; it's OpenSSL that implements it, not nginx. Reasonable people can disagree on what the right default is, but plenty of financial institutions have made the studied choice not to enable it.


Plenty of financial institutions have also made the studied choice to limit the length and types of characters I can use in my password.


No. I recognize this as snark, but it's inaccurate snark. Banks limit password lengths because the programmers who implement their apps are dumb. But programmers don't choose the SSL configurations for their app servers and load balancers; people who are paid to think about security do. Your attempt at snark here relies on an apples-oranges comparison.


I'm not calling you wrong (I readily admit my knowledge on this topic doesn't even compare to yours), but are you really saying that these banks hire security experts to set requirements for SSL configurations on their load balancers and then don't use these security experts to set requirements for password security? That seems borderline malicious on their part.


The security team at a bank is lucky if they even have a list of all the applications in use across the enterprise. There are bound to be hundreds. When those apps have ridiculous password policies, it's not because a developer simply decided "this is the right kind of password policy for our app", so that a security person could just say "uh, no". No, the restrictions are set up that way because the app is built badly. Can you guess how much it costs to revise password storage and UX for tens or hundreds of applications?


Anecdote in support of your point: I was a developer who sinned in the creation of a terrible password storage system on an internal web-app that had about 50 users (why didn't I just use LDAP???). It was at a Fortune 500 financial. I was fresh out of college. The company had a large security organization in-house and very clearly documented software best-practices but I was cowboy coding on an Infrastructure team. I was the stereotypically bad programmer at a large company who makes grievous security errors. I'm not even very comfortable making this confession but I believe I've learned a lot since then.


Interesting. If financial institutions routinely succeed at operational security but catastrophically fail at having a secure development life-cycle, is that a startup opportunity?


I don't know about "startup opportunity" specifically, but it's pretty much the raison d'être for most application security consultancies.

Usually when first engaged, you deal with operational issues (making sure all the applications they know about are assessed), but as you build on that, you try and instill secure development practices (so that every new application they build doesn't have the same issues as the ones you've just spent months uncovering).

The number of large clients I work with who don't have any SDLC process is staggering (I'd say it's the overwhelming majority of them). For the most part, the small group of security people are tasked with trying to secure the multitudes of applications which in many cases are 20-30 year old codebases. Their developer groups may be completely separate (usually from the result of all the mergers of financial institutions) and it's basically all fiefdoms.

As you start to work up the pyramid of enterprise security "hierarchy of needs" you get to things like a secure development life-cycle, but not all organizations are "ready" yet for that type of work. Some are just trying to figuratively stop the perceived bleeding.


The teams (or more likely outside vendors) that set up the bank's external-facing servers and load balancers are not going to poke around the application code.

A bank will have architecture and security teams that evaluate the applications, but their main job is to run each application through a "best practices" checklist or audit to identify potential trouble spots. An application will need to meet some kind of sane minimum requirement for password security, but many of these apps are legacy or mainframe, and not easy to change. Big banks move very slowly.


If the institution chooses an insecure password policy, it heightens the likelihood it will fail to ensure good SSL settings.

This tendency is independent of the fact that these functionalities are implemented by different teams, and that one team might happen to be competent enough to do the right thing despite the lack of institutional imperative. So the consumer might get lucky. So what?

Your initial comment was that "plenty of financial institutions" do SSL a certain way, and the respondent correctly pointed out that this fact adds no information to the discussion about SSL techniques, because plenty of financial institutions do dumb things. It's "apples and oranges" only insofar as he's saying the orchard manager is a poison spreading dummy so you can't trust the apples or the oranges.

Maybe you can elaborate on the "studied" part of your comment with specifics. That part was interesting.


You use this word "the institution" as if companies were hive minds. Read the other comments on this thread. Again: the people who make decisions about password complexity are almost never the security people.


Exactly. You shouldn't trust a particular technique just because a financial institution uses it. They have very little institutional culture around security, as you yourself point out. So I'm not sure why you brought up the fact that financial institutions use a particular SSL technique - that tells us nothing.


Your argument is exasperating, because I already addressed this notion that password complexity requirements in banking apps have anything to do with what financial security people think are best practices for SSL/TLS.


I still have no idea why you keep bringing up the finance industry - it brings no credibility to this discussion or your points, which seem reasonable enough. Even the "security people" taken collectively have no keen track record so why are we talking about them collectively, again? No big deal, I just don't get it. Shrug.


Of course, in a perfect world, developers would be paid to think about security.


> All the other servers can trivially enable EDH as well

Unless I'm reading OP wrong, that's not the case for the servers he uses for his tests: Stud can't enable it at all:

> stud doesn't have at all.

and in stunnel you have to compile it in for support, it's not just "not enabled by default", it's not compiled in:

> stunnel has it as a compile time/certificate configurable option.


From stud -h:

    Encryption Methods:
       --tls                    (TLSv1, default)
       --ssl                    (SSLv3)
       -c CIPHER_SUITE          (set allowed ciphers)
Edit: Though, you'd need to set DHE params, as another commenter said below. Stud doesn't do this atm, but I'm open to a patch!


I think 'seiji's right, and OpenSSL won't actually do DHE handshakes unless you give it parameters, which is another 2 lines of code that aren't actually in stud.


nginx is a web server, like Apache. stud is a few hundred lines of trivial proxy code. And would you like to take a bet on how many lines of C code it would take to add support for configurable cipher suite modes in stud? Fair warning: I already know the answer to this (and I don't even know that stud doesn't allow it).


Do you happen to have a patch? I might be interested in that :)


That's the trick -- you can enable your DHE cipher all day long, but if the code doesn't set up DHparams, it will never work.

Here's a quick (no error checking) way to set up DHparams if they are appended to a cert:

    /* read PEM-encoded DH parameters appended to the cert file */
    BIO *bio = BIO_new_file(path_to_a_file_with_dhparams, "r");
    DH *dh = PEM_read_bio_DHparams(bio, NULL, NULL, NULL);
    BIO_free(bio);
    /* the SSL context duplicates the params, so the local copy can be freed */
    SSL_CTX_set_tmp_dh(ctx, dh);
    DH_free(dh);
Where do the DH parameters come from? You can generate them yourself (1024 bits here):

    openssl dhparam -rand - 1024
For a completely isolated implementation (requiring no user certificate changes), see function ngx_ssl_dhparam in nginx-1.0.4/src/event/ngx_event_openssl.c


Doh! 'seiji wins. I was way too glib about DHE. Sorry.


Just a bit of history: I was the one that prodded Igor to add this.

I had been checking things out in nginx and noticed that DH was not implemented. One quick email to Igor and he got it done the same day along with getting this into the next version of nginx. Dude is bad ass.

Now if only someone can convince him or provide a patch to add SPDY support to nginx...


So what would you do with the generated DH parameters? Literally just `cat` them to the bottom of the SSL cert? Is there anything else that needs to be done? What happens when DHE-RSA-AES256-SHA is used without having those DH parameters in play?


Yup. Just add it to the end of your private key/certificate file (NB: this only applies to stunnel when configured for DH, or to other programs that read DHparams from a key/cert file).

If you try to use only DHE-RSA-AES256-SHA without DH being setup, nothing will connect. If you have DHE-RSA-AES256-SHA as an option with others, it will negotiate a non-DH cipher. (e.g. "DHE-RSA-AES256-SHA:!ADH:SHA" -- you can verify the ordering with `openssl ciphers -v DHE-RSA-AES256-SHA:!ADH:SHA`)
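
Concretely, assuming a combined key/cert file at an illustrative path, the append step is just:

  # generate 1024-bit DH parameters and append them to the key/cert bundle
  openssl dhparam 1024 >> /path/to/combined.pem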


Fair warning: this comment is apparently all kinds of wrong, but I leave it here for posterity.

Put:

  if(getenv("SSL_CIPHER_SUITES"))
    SSL_CTX_set_cipher_list(ctx, getenv("SSL_CIPHER_SUITES"));
anywhere after SSL_CTX_new().

But don't bother doing it with stud, because (as I sort of predicted) stud already does this: stud -c <ciphersuites>.

I don't understand what Matt is saying by "stud doesn't enable DH at all". Does stud build its own OpenSSL? The system OpenSSL will already support DHE.


Ask and ye shall receive: https://github.com/bumptech/stud/pull/6

stunnel DH code inserted into stud.


Fair point. Clarifying...


Unlike the above post, this fellow actually did some broad cipher testing (http://zombe.es/post/4078724716/openssl-cipher-selection), particularly around AESNI instructions in recent Intel chips.

With AESNI, use AES-128, AES-256, RC4-SHA, CAMELLIA-128. Without AESNI, use RC4-SHA, AES-128, AES-256, CAMELLIA-128.

In nginx, this looks like:

  # (wo/AESNI): ssl_ciphers RC4:AES128-SHA:AES:CAMELLIA128-SHA:!MD5:!ADH:!DH:!ECDH:!PSK:!SSLv2
  # (w/AESNI):  ssl_ciphers AES128-SHA:AES:RC4:CAMELLIA128-SHA:!MD5:!ADH:!DH:!ECDH:!PSK:!SSLv2
You eliminate weak ciphers. You retain RC4 for compatibility and speed. You order by performance. (Note that AES-128 is still ranked as secure through 2030 [at least]. You don't need to prefer AES-256.)
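
If you're curious exactly which suites one of those strings expands to, and in what order, you can ask OpenSSL directly (shown here for the non-AESNI list):

  openssl ciphers -v 'RC4:AES128-SHA:AES:CAMELLIA128-SHA:!MD5:!ADH:!DH:!ECDH:!PSK:!SSLv2'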


I can't tell if this is an apology or a non-apology. It seems to have elements of both.

Clearly the moral of the story is: "Don't claim that X sucks unless you are damn sure".

Saying something sucks is fightin' words. Don't expect people to be nice if you are wrong.


Why would someone apologize for writing an informative blog post? I'm glad he wrote both, even if he had to walk the first one back a bit.


I found both posts informative, and yet I'm with cbetz on this one. The issue isn't whether Matt's first post was informative but is instead whether it was fair|wise|necessary to say "Nginx sucks at SSL" instead of, say, "My initial testing, which needs to be investigated further, is showing Nginx SSL performance lower than other alternatives." The first headline is more likely to grab folks' attention, which is probably why Matt chose that headline. It is probably that choice that cbetz finds objectionable, and if so, I agree with him.


  > "My initial testing, which needs to be investigated
  > further, is showing Nginx SSL performance lower than
  > other alternatives."
You keep using that word ('headline'), but I do not think it means what you think it means.


Okay, I'll bite. Here you go: "Initial Tests Show Slow Nginx SSL Performance" Wasn't that hard, was it?


My point was that you seemed to have purposely made that 'headline' that you thought he should have used needlessly verbose, to the point where it couldn't be considered a headline.


I don't know that it required an apology or if he was even really wrong in the first post.

The result was him digging into it and figuring out that the default config included a computationally expensive setup.

That's a debatable thing. Not necessarily wrong.


As far as I can tell, saying "Nginx sucks at SSL" is actually not debatable and is indeed just plain wrong. Yes, the default SSL config uses a cipher that is, relatively speaking, more computationally expensive than others. That doesn't mean Nginx "sucks at SSL." Don't like the default cipher? Great -- change it.

I found the posts to be informative, but I get the impression that the "Nginx sucks at SSL" linkbait headline is what's rubbing folks the wrong way.


  > Clearly the moral of the story is: "Don't claim that
  > X sucks unless you are damn sure".
More like: "If you say something that upsets people, they will spend a lot of time and effort to display their ignorance in an effort to prove you wrong -- as if you had insulted their very being -- even though you just made an honest mistake." I read the original article, and I didn't see it as an attack on Nginx, or Nginx's SSL support. He ran some benchmarks and said, "Wow, those numbers suck."

  > Don't expect people to be nice if you are wrong.
I read the HN comments here and on the original post, and I don't remember anyone saying anything about ciphers. The people that were 'not nice' were also wrong. Would that excuse the author of the post from turning around and being 'not nice' to those people because they are wrong?


"Final feeling: Twitter is better than HN in all social dimensions of engagement, kindness, and authenticity."

Ouch.


There is some selection bias and context that the OP had via Twitter (people following already kind of know him and his work, etc.) whereas HN evaluates his work solely on this one post.


Yeah, the correct statement should be something like:

Final feeling: My long-time Twitter followers & friends are more engaged and kind than the anonymous programmers on HN.


Yeah, well, arrogantly complaining about people calling you out on unfounded speculation doesn't really work when you were originally quite arrogant in that speculation.


This is interesting to me. Maybe it's because twitter places much more emphasis on identity than HN comments? Twitter users are also more likely to be using that service to market their personal brand, rather than just to share whatever they're thinking, so are probably much more careful about what they say.


Twitter is better than HN? That's like comparing apples to a fish market.


It's great to follow up. Especially with such a particular detail.

It's also good to have thick skin. HN can be aggressive. But for good reason. I'd be willing to bet this tiny sting will result in more rigor in the future. I know it has worked that way for me.


changes: slow ssl encryption ciphers on by default, keepalive

before: nginx (ssl) -> haproxy: 90 requests per second

after: nginx (AES256-SHA with keepalive 5 5;) -> haproxy: 4300 requests per second


Any serious article ripping on the performance of something should at least link to the config file used.


This is why security is such a weird/nice/confusing/irritating line of work to be in. Newsflash: SSL is not a one-size-fits-all, secure-you-against-anything technology. I did not see the original article so I won't pretend that I knew the answer ahead of time. I just hope that I would not have accepted SSL as being a one-size-fits-all, completely uniform technical component.

There is a Dave Chappelle joke about cops sprinkling crack-cocaine over a crime scene in order to make the case quick and easy. Too many developers treat SSL like magic pixie dust for security.

Or as ptacek says "thanks in advance for putting my kids through college."


I use a similar directive in my apache2 configuration. Would I see a performance improvement in removing the DH option from the cipher suite? Or is this only directly related to nginx and how it implements its SSL protocol?

Secondly, by removing the DH method do I restrict any browsers from connecting to my site? I.e., are there any browsers, or security settings on browsers, that prevent the site from being trusted if DH isn't available?


It's an openssl configuration option, so it does affect performance for Apache with mod_ssl. It's not specific to nginx.

If you do not allow DH ciphers, you'll probably just lose users who deliberately configure their software to use only strong ciphers.

What you can do is put DH ciphers at the end of the list. That way, weaker ciphers are preferred but you're still supporting strong ciphers.
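
For Apache/mod_ssl, an illustrative (not prescriptive) way to express that ordering, with DHE suites kept at the end of the list, might be:

  # non-DHE ciphers first, DHE variants last; adjust the list to your needs
  SSLCipherSuite AES256-SHA:RC4-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:!ADH:!aNULL
  SSLHonorCipherOrder on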


I ran this against the slowest SSL website I know of. This site absolutely kills my phone web browser and I've been wondering about this problem for years. The site: manager.skype.com. The result? DHE-RSA-AES256-SHA. No wonder! Fix this guys!

In regards to this post, if this is the default configuration of Nginx then I agree that Nginx sucks. This is not a good default configuration for the Internet.


There's nothing wrong with using DHE algorithms, particularly if you're going to be transferring financial secrets around. If your phone can't keep up, well, then it probably should have a better entropy generator.


Are you sure the problem is entropy generation and not just extra bignum math? Also: there are plenty of major financial apps that are not allowed to use DHE, because DHE makes it impossible for the provider to monitor its own connections ("conventional" SSL/TLS allows for middleboxes that monitor and archive sessions by holding a copy of the server's RSA key).


What is best for security in regulated fields and what is mandated or forbidden for security in regulated fields are often almost disjoint sets.

Anyway, whether your phone has trouble coming up with the entropy or performing the math, you should probably be using something more substantial.


You're suggesting that people should pick a different phone so they can get PFS with the small fraction of SSL servers that support it?


I'm suggesting that if for some reason a site thinks that that sort of security is necessary, they shouldn't change their mind for the sake of people using their telephones.

Particularly since at the current rate of development, the average phone will be able to do it just fine in a few months.


It has been maxing out my laptop CPU for an inordinate amount of time for years as well. These "financial secrets" are emailed in the clear after every transaction, so your point is moot.

Burgerbrain: PayPal.com, CitiCards.com, 2checkout.com, and many payment processors all use RC4-SHA or AES256-SHA. You are wrong.


Their patchy security doesn't render his argument invalid.

Edit: beachaccount: Financial institutions not using DHE is not a logically sound counter to "There's nothing wrong with using DHE algorithms, particularly if you're going to be transferring financial secrets around." While those institutions may choose not to, there is nothing wrong with others choosing otherwise. Additionally, patchy security on the part of one operator certainly is not a logically sound argument against this.

Furthermore, in the future, actually reply to a post in order to reply to a post. Not doing so unnecessarily confuses the flow of conversations (posts are not scarce resources).


"Patchy security"? What are you on about?


I'm referring to the portion of the comment I was responding to: "These "financial secrets" are emailed in the clear after every transaction"

The implication being that since they were sending the information in the clear in a separate part of their system, that they were wrong in configuring their https site as they did. I object to that conclusion.


Ah. Patchy works there. I'm sorry about that. Having a hairtrigger day.


No worries!


No, that indicates that these were not actually financial secrets, hence the quotes.


I'm going to go ahead and give the developers of the particular site the benefit of the doubt and not you, if you don't mind.

Furthermore, one particular site using https in a particular mode when they don't need to use https at all is still not a logical argument against other sites using https with those ciphers.


If you are exploring and testing SSL, the SSL Labs tools come in handy. For instance, see what Gmail and Github are doing:

https://www.ssllabs.com/ssldb/analyze.html?d=mail.google.com

https://www.ssllabs.com/ssldb/analyze.html?d=github.com


Guess I was right about the cipher used.

http://news.ycombinator.com/item?id=2753903


Thanks to Matt and everyone who contributed to figuring this out - and then figuring it out again. Comments on the nginx mailing list (which I encourage you to subscribe to if you're a user):

http://forum.nginx.org/read.php?2,212229


Ah, if only we had an occasional second upvote! This is far more useful than the original post, which was already well above average. If you are deploying nginx with SSL, you need to know about the configuration details in the article.


keywords in footer: nginx, openssl, ciphers, DHE-RSA-AES256-SHA bad, AES256-SHA good, hn, twitter, glee soundtrack


Look at his other posts. He just has joke keywords in all his posts.


Possible SEO play? Maybe not, considering his site is almost completely lacking on-page SEO. Perhaps an accidental copy/paste?


So what should ssl_ciphers be? Can't you just move that one to the end somehow?


So, are these the new numbers? I copied the numbers from the original and the new post:

  haproxy direct: 6,000 requests per second
  stunnel -> haproxy: 430 requests per second
  (OLD) nginx (ssl) -> haproxy: 90 requests per second
  nginx (AES256-SHA) -> haproxy: 1300 requests per second
  nginx (AES256-SHA with keepalive 5 5;) -> haproxy: 4300 requests per second
Did other things change or is nginx more than twice as fast as the next best solution?


Well at least he followed up. Most people don't bother to correct their mistakes.



