An ab (ApacheBench) test of my new site, niflet.com, on a relatively cheap ($95/month) dedicated server:
Document Length: 11419 bytes
Concurrency Level: 10
Time taken for tests: 20.000 seconds
Complete requests: 43633
Failed requests: 43602
(Connect: 0, Receive: 0, Length: 43602, Exceptions: 0)
Write errors: 0
Total transferred: 523890489 bytes
HTML transferred: 514028301 bytes
Requests per second: 2181.64 [#/sec] (mean)
Time per request: 4.584 [ms] (mean)
Time per request: 0.458 [ms] (mean, across all concurrent requests)
Transfer rate: 25580.46 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 2 5 5.6 4 50
Waiting: 1 2 3.9 2 48
Total: 2 5 5.6 4 50
Percentage of the requests served within a certain time (ms)
50% 4
66% 4
75% 4
80% 4
90% 5
95% 5
98% 34
99% 40
100% 50 (longest request)
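The exact flags aren't shown above, but a run matching the 20-second duration and concurrency level of 10 would look something like this (URL taken from the post; newer ab builds also accept a -l flag to stop counting variable-length responses as failures):

    ab -t 20 -c 10 http://niflet.com/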
Requests "fail" because the document length varies. I'm pretty blown away by Go's performance, especially knowing how much processing goes into producing a custom response to every request. Also every response is gzipped. (I'm using Go's web server and Sqlite.) The theoretical limit of requests that could be served in a 10 hour day from this single cheap server is 78 million. Now I just need some real users...
In this case, you're doing everything native, so custom response == normal response.
This isn't specific to Go, either; you'd get the same using C or Delphi, and almost the same using Java (not quite the same in practice, since most Java developers deploy in a heavyweight container such as Tomcat).