
Benchmarking Nginx with Go - dgudkov
https://gist.github.com/hgfischer/7965620
======
stock_toaster
For nginx http proxying try enabling backend keepalives.

In the upstream block add:

    keepalive 60;

In the server (or location) block add:

    proxy_http_version 1.1;
    proxy_set_header Connection "";
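Put together, a minimal proxy config might look like this (sketch only; the upstream name "gobackend" and port 8080 are assumptions, not from the post):

    upstream gobackend {
        server 127.0.0.1:8080;
        keepalive 60;        # pool of idle keep-alive connections to the backend
    }

    server {
        listen 80;

        location / {
            proxy_pass http://gobackend;
            proxy_http_version 1.1;          # upstream keep-alive requires HTTP/1.1
            proxy_set_header Connection "";  # clear Connection so it isn't "close"
        }
    }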

------
CSDude
The overhead is not negligible because you are only printing hello world; the
only I/O here is writing to the response socket. When you do other I/O, where
the requests depend on other sources, the overhead will be negligible because
your HTTP requests will actually be waiting for some other I/O to happen,
which is the case for almost every program. Plus, you get nice features with
Nginx that you don't have in plain Go.

------
fleitz
I'm wondering how this works for real world situations in which the HTTP
server must slowly trickle responses back to the client. In the past I've seen
major speed ups by using a store and forward proxy.

Since this server always prints hello world, it would stand to reason that
Nginx should be caching the result.

------
hgfischer
Hi, I'm the author of this post.

The initial purpose of this test is to compare the different ways of
connecting Nginx to Go.

It doesn't make sense to test against a heavy task in Go instead of this
single static string. Nginx in front of Go will not perform better under
these circumstances. It will only perform better if static content is served
directly by Nginx or if caching is enabled, which is not the purpose of this
test.

Following what some folks suggested, I also made some recent changes, like
swapping ab for wrk and tuning nginx to disable gzip and enable keep-alive
connections.

The results are very different now.

------
oakaz
I also did some benchmarks previously, for the nginx alternative that I wrote
in Go, boxcars:
[http://github.com/azer/boxcars](http://github.com/azer/boxcars)

comparison of nginx and boxcars:
[https://gist.github.com/azer/5955772](https://gist.github.com/azer/5955772)

------
aidenn0
ab needs to die. Yesterday, if possible.

~~~
jonahx
what's your favorite alternative?

~~~
bhauer
Wrk [1] is my preference. Time-limited tests. Average and standard deviation
on latency. And now with Lua scripting.

If you must use an ApacheBench-style tool, at least use the multi-threaded
clone named WeigHTTP [2].

With high-performance servers, ApacheBench is a limiting factor.

[1] [https://github.com/wg/wrk](https://github.com/wg/wrk)

[2]
[https://github.com/lighttpd/weighttp](https://github.com/lighttpd/weighttp)
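For reference, a typical wrk invocation looks something like this (the thread and connection counts are just illustrative):

    wrk -t4 -c100 -d30s http://localhost:8080/

That runs a 30-second, time-limited test with 4 threads and 100 open connections, reporting average latency and standard deviation.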

------
corresation
I will take the advice provided the day I deploy a Go app that prints a single
static string to the stream. In other words, absolutely never.

This is the core problem with such benchmarks - that 'overhead' quickly
becomes proportionally irrelevant when you're actually doing something worth
doing. But with Nginx in front, suddenly you have so much flexibility without
reinventing the wheel, including load balancing, mixing server technologies
with ease, not dealing with static junk in your go code, proxy caching
(recently used this to really good effect with a Go service, putting zero
caching in the Go code and instead using standard http expiration headers to
allow Nginx to do the magic), anti-DOS, streaming compression, security, SPDY,
and on and on.

As fair disclosure, I have written on this before -
[http://dennisforbes.ca/index.php/2013/08/07/ten-reasons-
you-...](http://dennisforbes.ca/index.php/2013/08/07/ten-reasons-you-should-
still-use-nginx/)

~~~
mrweasel
I'm not sure how big the issue is, but I would add the ability to run your Go
app as a restricted user.

Using Nginx or another webserver in front of your app means that you won't
have to deal with privilege separation yourself. Just run the Go binary in a
chroot as a restricted user and let Nginx deal with the binding on port
80/443.

~~~
BarkMore
If all you want is binding to port 80/443, then it's simpler to use iptables
port forwarding than Nginx.
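For example, a single NAT rule can redirect the privileged port (sketch, assuming the Go app listens on 8080; must be run as root):

    # Redirect inbound TCP port 80 to the unprivileged port 8080.
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080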

