

Hello Heroku World - Autobenching Heroku - iamclovin
http://blog.evanweaver.com/2012/02/29/hello-heroku-world/

======
hemancuso
Interesting post with impressively confusing graphing. Two splines per
color, but stroked differently, with nothing to indicate which of the two
y-axes a given stroke of a color belongs to. The legend shows only
color, but strokes an empty box, for some reason.

~~~
nviennot
I had to read it 5 times to understand the meaning...

From what I understand: the more dynos you have, the less each dyno can do.
Which is funny when you think about it, because your Heroku bill increases
linearly with the number of dynos.

------
blibble
if httperf is using select() with 65k odd sockets then that could be a
bottleneck...

the "dumb" C impl can be faster too! it should probably fork() a few times so
multiple accept()s can fight over the socket, which should probably be put
into non-blocking mode, and should also turn nagle off.

for even more points you can reduce the copying of the trivial response from
userspace into the kernel using sendfile()/splice(), if you mlock() it into
RAM first!

the printf likely reduces the throughput by a large amount too!

(I've spent far too much time fiddling with various syscalls for synthetic
benchmarks!)

~~~
whargarbl
Post says the driver was benched at 25k rps. But yeah that C implementation
sucks.

------
weirdcat
Python/Bottle's dismal performance here is in line with Nicholas Piël's
findings regarding the WSGIRef server:

 _Disqualified servers: (...) WSGIRef, I obtained a reply rate of 352 but it
stopped reacting when we passed the 1900 RPS mark_

<http://nichol.as/benchmark-of-python-web-servers>
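
For context: Bottle's run() falls back to wsgiref's single-threaded
simple_server by default, so if the benchmarked app wasn't given another
backend, it was effectively benchmarking WSGIRef. A minimal equivalent of
what that server would have been running (my sketch, not the post's code):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # The whole "application" under test: a fixed 11-byte body.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World"]

def run(port):
    # wsgiref's simple_server handles one request completely before
    # accepting the next, so throughput is capped no matter how
    # trivial the app is.
    make_server("127.0.0.1", port, app).serve_forever()
```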

------
vetler
Interesting.

Didn't know Jetty was this bad. Or perhaps there are configuration options
that influence this?

The author concludes that Tomcat collapses, but in the 4- and 7-dyno scenarios
there doesn't seem to be that much difference between Tomcat and Finagle
(which the author says did ok). But perhaps I'm reading the graphs wrong?

------
minikomi
Looking at the app he used for Bottle, it only replies (dynamically) on
/hello/:name .. the others (Sinatra, Node) are configured to reply on "/" as
the route. Is that the reason he got such consistently bad results for Bottle?
Did he adjust the route used? Would like to see an actual equivalent "/" =
"Hello World" app tested..

------
azov
Hm... I briefly glanced over the graphs, and the results don't make sense to
me. Every single server he tested must make a C accept() call at some point,
plus some (a lot of) extra stuff. If those servers take less time than a plain
accept call, doesn't it just indicate that the benchmark is flawed? Am I
missing something?

~~~
whargarbl
AFAIK the issue is the way they listen() on the socket, as well as whether the
accept is concurrent in some way.

------
arete
Does anyone know of a good alternative to ab or httperf for load testing
high-performance HTTP servers? With httperf I can't seem to coax more than
~25,000 requests/sec out of my framework built on Jetty, but ab can easily do
> 45,000 req/s. Both seem to be limited by the load generator, not the server.

~~~
nl
There is siege, which looks ok (I haven't used it):
<http://www.joedog.org/siege-home/>

