Given nginx's track record, I was prepared to give nginx the benefit of the doubt and assume that it's an invalid test.
That said, I'm going to run some benchmarks of my own.
I'll try to reproduce these results tomorrow, but if I had to guess, I'd say ssl_session_cache was left to its default (off) which means that every connection has to do the expensive SSL handshake.
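For reference, here's a minimal sketch of what enabling the session cache looks like in an nginx server block (the certificate paths and timeout here are placeholders, not details from the thread):

```nginx
server {
    listen 443 ssl;

    # Placeholder certificate paths
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # Without a session cache, every new connection pays the full
    # handshake cost; a shared cache lets returning clients resume
    # sessions cheaply across all worker processes.
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;
}
```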
I tested nginx as a proxy, serving static files, and serving nginx-generated redirects. I tried changing all the relevant ssl parameters I could find. All setups resulted in the same SSL performance from nginx. I even tried the setup on more than one server (the other server was quad-core; nginx got up to 75 requests per second).
So: "all the relevant ssl parameters I could find", no details about what those actually were, and the surprising result that none of them made any difference.
In the same situation, I might think I was doing something wrong...
And then this overarching statement:
Never let nginx listen for SSL connections itself.
So it wouldn't need more than a few articles, like "nginx + ssl = works like a charm", or "nginx has better SSL support than Apache". It would not matter whether those were actually correct, just a single article of questionable quality would be sufficient.
"nginx does not suck at ssl"
That way, it is possible to get some initial feedback and maybe even some good hints that help speed up the analysis. In the best case, the announced analysis could become a collaboration between multiple authors.
I do think academics overcorrect on this, and should share more early results, possibly via things like blog posts (this is slowly starting to happen). But erring in the opposite direction is also quite common among tech bloggers. In particular, if you're going to publish anything that looks vaguely like a benchmark, it might be worth taking at least a few days to check out possible problems before sending it out into the world (not months or anything, but a few days).
That said, while RPS is low, even as I'm hammering it with ab it seems to have little problem staying responsive in my browser :S
On a 4-core Xeon E5410 using ab -c 50 -n 5000 with 64-bit Ubuntu 10.10 and kernel 2.6.35 I get:
For a 43 byte transparent gif image on regular HTTP:
Requests per second: 11703.19 [#/sec] (mean)
Same file via HTTPS with various ssl_session_cache params set:
Requests per second: 180.13 [#/sec] (mean)
ssl_session_cache builtin:1000 shared:SSL:10m;
Requests per second: 183.53 [#/sec] (mean)
Requests per second: 182.63 [#/sec] (mean)
Requests per second: 184.67 [#/sec] (mean)
The cache probably has no effect because each 'ab' request is a new visitor. But I'd guess the first https pageview for any visitor is the most critical pageview of most funnels.
Use "openssl speed -elapsed" to test performance on your system.
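For example (the `-seconds` flag just shortens the run; rsa2048 approximates the per-handshake cost with a 2048-bit key):

```shell
# Measure raw RSA sign/verify throughput using wall-clock time.
# Signs per second roughly bounds full handshakes per second per core.
openssl speed -elapsed -seconds 1 rsa2048
```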
Also, the article author apparently added support to stud (in his own fork) for X-Forwarded-For. I don't think this is required any longer, due to this fairly recent stud commit: https://github.com/bumptech/stud/commit/9d9b52b7d3ce90fa84c6...
It would be interesting to see stud with a session cache too.
Thanks for the consideration!
initial inclination is not to merge the bulk of this into stud mainline
I agree. The HTTP stuff is still too integrated. ifdefs are ugly.
The solution is to do what showed up when I was 99% done working on XFF -- the nice PROXY protocol addition. We just need to get PROXY support into nginx now to obviate my XFF machinations.
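For context, the PROXY protocol (v1) just prepends one human-readable line carrying the original client address before the proxied bytes, so the backend can recover the real source IP without any HTTP-level header rewriting. A sketch of what that line looks like (the addresses here are made up):

```shell
# PROXY protocol v1 header: protocol, source IP, destination IP,
# source port, destination port, terminated by CRLF, sent once
# before any application data flows.
printf 'PROXY TCP4 198.51.100.7 203.0.113.9 56324 443\r\n'
```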
You're correct in assuming the library enforces its own size limitations. It operates on length of received SSL data which is capped by the static receive buffer at 32k. Nice and tiny.
(Also, you are, of course, painfully correct about lack of bounds checking and lack of return value checking on the malloc/realloc calls. If I ever graduate the branch to production status, the six malloc calls and three realloc calls will be wrapped in proper checks.)
I'm not saying there are no cases where recovering from allocation errors would be possible, but it's not the general case. It's usually easier to treat any allocation error as fatal and ensure your programs don't run out of memory through other means.
Also, there's no bounds checking on the size, so under certain conditions, such as a 2 GB/4 GB allocation, the size can wrap around and you may end up allocating zero bytes or a negative/huge amount.
The author benchmarks nginx at 26,590 TPS on a quad-core 2.5 GHz AMD system.
He's concerned with the raw speed of the SSL computations rather than requests per second, but if you're actually concerned about SSL speed, and you're handling enough requests per second to justify optimizing it, it could be pretty useful.
It made a significant performance difference to me.
I haven't done extensive benchmarking on it, but very knowledgeable people vouch for it.
This is using nginx strictly as an ssl termination, where I need to do some header manipulation that I couldn't do in stunnel/stud.
I remembered I had an older 8 core server sitting unused at the moment. I configured nginx with 8 workers, and ran `ab` against it. From a single (VM) host, I can get 680 connections per second (maxed the cpu on the host running the test). From 4 hosts, each host got > 290 connections per sec, so I got nginx up to over 1190 new connections per second, and can likely push it further.
[EDIT] got it to peak at 1535 requests per second with 4 hosts testing.
The article needs way more detail.
(on an 8 core server...)
haproxy direct: 6,000 requests per second
stunnel -> haproxy: 430 requests per second
nginx (ssl) -> haproxy: 90 requests per second