Nginx Performance Tuning for SSL (techsamurais.com)
46 points by pkandathil on Aug 1, 2013 | 19 comments



Sacrificing security in exchange for a minor performance boost. How does your domain score with https://www.ssllabs.com/ssltest/ after disabling those various ciphers?

Please read this thread from two years ago for a discussion of the pros and cons of this approach: https://news.ycombinator.com/item?id=2759596


Thank you for the info. I have updated the original article to reflect the changes.


This again?

Yes, disabling DHE ciphers will speed things up. Please understand the security implications of what you're doing, though. The ephemeral Diffie-Hellman cipher suites are the only way to achieve the Perfect Forward Secrecy that's been all the rage lately (sure, there are plenty of ways to screw it up even then, but it's a prerequisite).

At the very least, consider tossing a few ECDHE ciphers at the start of the list. They're plenty fast, and they're a good foundation for providing PFS for your users.
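
Something in this direction, for example (a rough sketch, not a drop-in config: suite availability depends on your OpenSSL build, so check with openssl ciphers -v first):

  # prefer the fast ECDHE suites so capable clients still get forward secrecy,
  # while dropping only the slower classic DHE suites
  ssl_prefer_server_ciphers on;
  ssl_ciphers ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:HIGH:!aNULL:!MD5:!kEDH;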


Just compared our SSL config (https://www.theticketfairy.com/) to the one at the end of the article (based on HN recommendations), and I'm pretty happy that it was already set to pretty much exactly that (apart from us using 100m for the SSL session cache rather than 10m) :)
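
For anyone wondering what that looks like in the config, it's roughly this (sizes are a judgment call; the nginx docs say about 4000 sessions fit in 1 MB of shared cache):

  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 10m;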

One more thing I'd advise is adding this directive if you're running Nginx 1.3.7 or higher:

ssl_stapling on;

The tech behind this is explained here: http://blog.cloudflare.com/ocsp-stapling-how-cloudflare-just...
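
Stapling usually wants a couple of companion directives as well; roughly something like this (the certificate path and resolver below are placeholders, not recommendations):

  ssl_stapling on;
  ssl_stapling_verify on;
  # hypothetical path to the issuing CA chain used to verify the stapled response
  ssl_trusted_certificate /etc/nginx/ssl/ca-chain.pem;
  # any DNS resolver nginx can reach, needed to fetch the OCSP response
  resolver 8.8.8.8;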

Lastly, if you can be bothered to build Nginx 1.4 (1.4.2 is the latest version at the time of writing), you can enable SPDY support as well.
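
If you do build 1.4.x yourself, the SPDY side is small: assuming the binary was configured with --with-http_spdy_module, it's essentially just adding spdy to the listen directive:

  listen 443 ssl spdy;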


Perhaps someone more experienced can clarify, but is the gist of this article basically sacrificing perfect forward secrecy for more performance?


That's exactly what's happening.


I would love to know the answer too. Or maybe nginx should fix the way it executes the encryption algorithms.


Good to see my conclusions from two years ago still hold: http://matt.io/technobabble/hivemind_devops_alert:_nginx_doe... (or its HN thread, which meritt kindly dug up: https://news.ycombinator.com/item?id=2759596)

Sadly, these days we want PFS everywhere to stop the snooping apparatus, but if you're not really important and just want to stop local-network or MITM snooping, removing PFS should be okay (at least for my boring sites).


With ECDHE cipher suites, you can get the best of both worlds.


I am always extremely wary of any configuration changes that alter encryption algorithms. A simple typo can mean going from the exclusion of a weak cipher to the explicit inclusion of it.

One of the performance perks comes from the session cache. Is there an effective way to share that cache between machines serving on the same hostname? For instance: ten servers all serving round robin requests for www.example.com.


> Change your SSL cipher settings to this: ssl_ciphers ALL:!kEDH!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

SSLv2 is flawed (http://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1....). He should disable it.
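
For instance, something in this direction would avoid SSLv2 entirely (a sketch only; TLSv1.1/1.2 here assume OpenSSL 1.0.1 or later):

  # no SSLv2 at all; also drop export-grade and anonymous suites
  ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!MD5:!EXP;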


Thanks, I will update the article now.


> The web server is running on an EC2 t1.micro instance.

Why do people do this?! t1.micros run beautifully at load for 30 seconds, then essentially stop entirely for a while... not to mention having much slower network bandwidth than even an m1.small.


  > Why do people do this?
My guess is: so they can utilize the free tier for a year. After that, I agree it makes almost no sense.


Oh, I'm happy to use it for side projects getting a few dozen hits a day. I'm continually baffled by folks running performance benchmarks against them, though.


t1.micro throttling is simply domain knowledge that not everyone has.


After the testing, you will see that we upgraded to a c1.medium instance.


OK, but initially trying to benchmark on a t1.micro indicates a lack of experience with EC2 that colors any other benchmarking you might be doing.

Add in others' comments about this basically compromising security for speed, and this is a bit of an irresponsible article.


[deleted]


I have seen the calculation as written in this article many times. Is it wrong? http://nls.io/post/optimize-nginx-and-php-fpm-max_children

pm.max_children = (total RAM - RAM used by other process) / (average amount of RAM used by a PHP process)
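
With made-up numbers just to illustrate the arithmetic (none of these figures come from the article):

  ; e.g. 2048 MB total, ~512 MB for nginx/MySQL/OS, ~48 MB per PHP worker:
  ; (2048 - 512) / 48 = 32
  pm.max_children = 32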



