

How to Generate Millions of HTTP Requests (2012) - vinnyglennon
http://dak1n1.com/blog/14-http-load-generate

======
squiguy7
Another tool I have been using on my personal project is wrk [1]. It's pretty
simple to use and can generate thousands of connections with multiple threads.

[1]: [https://github.com/wg/wrk](https://github.com/wg/wrk)
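For reference, a typical wrk invocation looks like this (the URL, thread count, and connection count are just placeholders):

```shell
# 12 threads, 400 open connections, run for 30 seconds
wrk -t12 -c400 -d30s http://127.0.0.1:8080/index.html
```

It prints per-thread latency and throughput stats plus a requests/sec total at the end.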

~~~
meteorfox
There's a branch from Gil Tene (Azul) that fixes wrk for the coordinated
omission problem, which he explains in the readme of that same repo[1].

[1] [https://github.com/giltene/wrk2](https://github.com/giltene/wrk2)
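Per the wrk2 README, the fix works by adding a mandatory `-R` flag that holds a constant throughput, so latency is measured against the intended schedule rather than whenever the previous request happened to finish. A sketch (numbers are placeholders):

```shell
# hold a constant 2000 req/s and report corrected
# latency percentiles with --latency
wrk -t2 -c100 -d30s -R2000 --latency http://127.0.0.1:8080/
```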

------
dom96
There are plenty of different HTTP benchmarking applications for Unix OSes, but
I still haven't found a good one for Windows. Has anybody come across one?

~~~
bhauer
I am under the impression that it's possible to compile Wrk for Windows [1],
although I have not personally done so.

Incidentally, I've not read the linked article, but skimmed the first few
paragraphs. The numbers of requests per second that he was able to generate
with various tools seem remarkably low. ApacheBench is the least performant
load generator since it's only single-threaded, but it can still generate tens
of thousands (about 25,000) of requests per second on an i7 desktop machine.
Wrk can generate significantly more since it uses all CPU cores of the
load-generation machine.

[1] [https://github.com/wg/wrk/issues/49](https://github.com/wg/wrk/issues/49)
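To put the ~25,000 req/s figure in context, a typical ApacheBench run looks like this (URL and counts are placeholders; `-k` enables keep-alive so many requests share one connection, as the article recommends):

```shell
# 100,000 requests, 100 concurrent, reuse connections with keep-alive (-k)
ab -n 100000 -c 100 -k http://127.0.0.1:8080/
```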

------
nodesocket
A paid service, but the best I've seen is
[https://blitz.io](https://blitz.io). You can choose region(s), the number of
requests, multiple endpoints, and lots of curl-like options.

~~~
mryan
I used to be a big fan of Blitz.io, and used it a lot for benchmarking my
client's services. However they changed their payment model so the only option
is to sign up for a monthly subscription, which does not fit in with my use
case.

Does anyone know of an alternative that uses a pay-per-use option? In the
meantime I have reverted to Bees with machine guns.

~~~
nissehulth
[https://loadimpact.com/](https://loadimpact.com/) seems to have pay-per-test
as an alternative to subscriptions.

------
nopal
Here's one to spin up some temporary EC2 instances:
[https://github.com/newsapps/beeswithmachineguns](https://github.com/newsapps/beeswithmachineguns)
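For the curious, the workflow (roughly as described in the project's README; security group, key pair, and target are placeholders) is:

```shell
# spin up 4 EC2 instances ("bees") in security group "public"
bees up -s 4 -g public -k mykeypair
# each bee fires 10,000 requests total, 250 concurrent, at the target
bees attack -n 10000 -c 250 -u http://target.example.com/
# tear the instances down when done
bees down
```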

------
alexgartrell
> One important thing to keep in mind when load-testing is that there are only
> so many socket connections you can have in Linux. This is a hard-coded
> kernel limitation, known as the Ephemeral Ports Issue. You can extend it (to
> some extent) in /etc/sysctl.conf; but basically, a Linux machine can only
> have about 64,000 sockets open at once. So when load testing, we have to
> make the most of those sockets by making as many requests as possible over a
> single connection. In addition to that, we'll need more than one machine to
> do the load generation. Otherwise, the load generators will run out of
> available sockets and fail to generate enough load.

If the number of ports is the bottleneck, you can configure more IPs on a
single host.
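The ephemeral-port limit and the extra-IP workaround both look roughly like this on Linux (values and interface name are illustrative):

```shell
# widen the ephemeral port range; the per-destination connection limit
# is roughly (ports in range) x (local source IPs)
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# allow reusing sockets in TIME_WAIT for new outgoing connections
sysctl -w net.ipv4.tcp_tw_reuse=1

# add a second source IP to multiply the usable 4-tuples
ip addr add 10.0.0.2/24 dev eth0
```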

~~~
nqzero
ipv6

------
larsvegasGT
Disclaimer: I’m talking about my startup :)

StormForger ([https://stormforger.com](https://stormforger.com)) is cloud
based Load Testing as a Service.

If you'd like to create test cases using a JavaScript DSL, run load tests and
performance analysis, and let us take care of all the provisioning and data
analysis stuff: feel free to sign up for our private beta. Just drop us a line
if you have further technical questions.

Speaking of the big numbers in this thread, you may want to read this thread
as well:
[https://news.ycombinator.com/item?id=7920930](https://news.ycombinator.com/item?id=7920930)

------
mickeyben
One tool I used a lot recently is vegeta [1]. It can be used both as a cli and
a go library. You can generate html (with nice plotting), csv or json reports
and launch distributed attacks.

[1] [https://github.com/tsenart/vegeta](https://github.com/tsenart/vegeta)
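A minimal vegeta pipeline from its README looks like this (rate, duration, and target are placeholders):

```shell
# attack at 1000 req/s for 30s, save raw results, then print a text report
echo "GET http://localhost:8080/" | \
  vegeta attack -rate=1000 -duration=30s | \
  tee results.bin | \
  vegeta report
```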

------
marcosnils
Here's another #golang HTTP benchmark tool which is pretty cool. It's pretty
much like ab/httperf, with the benefit that it prints a histogram of the
requests at the end, which is very useful.

------
otterley
See also Siege ([https://www.joedog.org/siege-home/](https://www.joedog.org/siege-home/))
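A basic Siege run looks like this (URL and counts are placeholders):

```shell
# 50 concurrent users, benchmark mode (-b, no think-time delay), 30 seconds
siege -b -c 50 -t 30S http://www.example.com/
```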

~~~
ecdavis
I used Siege a few years ago for some benchmarking and was not impressed. It's
a pity because I really like the interface, but Siege is technically lacking.

There is a warning in the configuration file against using keep-alive with no
explanation as to why. That makes it useless when you need to benchmark
persistent connections.

Each Siege "user" (i.e. client) has its own thread, which means they consume
a large number of resources. Depending on your CPU, you will also hit a point
where more time is spent switching between threads than actually sending
requests. I wasn't able to exceed 400 users on my 2011 test machine. That
makes it useless when you need to benchmark large numbers of connections.

------
KedarMhaswade
Has anyone used faban ([http://faban.org/](http://faban.org/))?

------
lectrick
Apparently, if you want millions of requests per second, your app has to be
built on Erlang ;)

------
vkat
+1 to JMeter; you can create complex HTTP requests to a site, simulating user
behavior.
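For actual load generation (as opposed to building the test plan in the GUI), JMeter is usually run headless; a sketch, assuming an existing test plan file:

```shell
# non-GUI mode (-n): run a saved test plan and log results to a .jtl file
jmeter -n -t testplan.jmx -l results.jtl
```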

------
lemu84
If Python isn't a problem, I would suggest using locust.io
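A Locust test is just a Python class; a minimal sketch (class name, host, and user counts are placeholders, and the CLI flags vary between Locust versions):

```shell
# minimal locustfile using the current HttpUser API
cat > locustfile.py <<'EOF'
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 5)  # simulated user think time

    @task
    def index(self):
        self.client.get("/")
EOF

# headless run: 100 users, spawning 10/s, against the target host
locust -f locustfile.py --headless -u 100 -r 10 --host http://example.com
```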

------
liquidmetal
On an unrelated note, that website looks so 2011!

------
sethammons
loader.io (disclaimer, it is a product of a division of the company with which
I work).

------
joshstrange
Good series of posts but can we get a [2012] tag on this?

~~~
dang
Sure; done.

