
Siege – HTTP load tester and benchmarking utility - enesunal
https://github.com/JoeDog/siege
======
Veratyr
Siege is great! Really nice and easy to use.

Also noteworthy is Tsung[0], which is (far) more complicated but can speak
more protocols, be configured to emulate user behaviour a little more closely
and send crazy amounts of traffic.

[0]: [http://tsung.erlang-projects.org/](http://tsung.erlang-projects.org/)

~~~
robbles
Tsung also has the interesting property of modeling traffic as individual user
sessions, with probabilistic thinktimes and dynamic variables. Makes it
possible to test your app in a much more realistic way than most tools, which
just benchmark a series of identical requests or replay a log.

------
AYBABTME
As far as I can tell, siege/vegeta/ab/boom/wrk are all affected by Coordinated
Omission, which makes their results of little value. The only tool I'm aware
of that handles this properly is wrk2. I'm not sure about the distributed tools
(free ones like JMeter, or hosted ones like gatling.io and blitz.io), but I'm
fairly certain they all suffer from the same problem.

If you're reaching for a tool like that and can code in Go, implementing a
correct one is very easy (~200 LOC): just spin up a new goroutine for each
request at a regular interval.

~~~
robbles
If you write your own, how do you suggest avoiding the same coordinated
omission error as those tools?

~~~
AYBABTME
Those tools all make use of a pool of workers. Say you want 1000 RPS: they'll
make something like 100 workers and have each do 10 RPS. To sustain 10 RPS,
each worker's requests must complete within 100ms. Coordinated omission arises
when those requests take more than 100ms each, and the rate of requests you're
measuring becomes distorted. This is well explained by Gil Tene:

[https://groups.google.com/forum/#!msg/mechanical-sympathy/icNZJejUHfE/BfDekfBEs_sJ](https://groups.google.com/forum/#!msg/mechanical-sympathy/icNZJejUHfE/BfDekfBEs_sJ)

How do you avoid this error yourself? Don't make a pool of workers; give each
request its own thread/goroutine. So you'll create 1000 goroutines per second,
each making its own request. This way you won't omit measurements by skipping
beats.

~~~
robbles
great explanation, thanks!

------
anonfunction
I've used siege for a long time and never had any problems.

However, I'm easily tempted by the new and shiny which led me to find another
great HTTP benchmarking tool:
[https://github.com/tsenart/vegeta](https://github.com/tsenart/vegeta)

~~~
voltagex_
How do siege and vegeta compare to apachebench?

~~~
anonfunction
Siege is very comparable, but vegeta has a lot more output options such as
graphs, JSON, CSV, etc.

Another tool I forgot to mention was wrk (and now wrk2), which are unique in
that they are scriptable via Lua.

~~~
tobz
As a small follow-on, there is also Barrage [1], which is more of a framework
than anything else. kellabyte wrote it to benchmark changes to Haywire [2], so
it's geared towards configuring your client and server, and then collecting
all of the metrics from both the client side, and the server side, and
displaying them in an easy-to-digest format.

This obviously isn't terribly useful if you're just trying to gauge request
rates, or apply load, but it's very useful if you need a repeatable way to not
only measure request rates but see how your server is doing under load.

[1]
[https://github.com/kellabyte/barrage](https://github.com/kellabyte/barrage)

[2]
[https://github.com/kellabyte/Haywire](https://github.com/kellabyte/Haywire)

------
VeejayRampay
The only problem I've had with siege and similar tools is that they're good
for simulating load but, from my perspective, not as useful for simulating
real-life load: multiple IPs, different bandwidths, bursts, etc.

The only services I found that simulate real-life usage were paid services,
which is understandable. Maybe in the future there will be some bittorrent-
like load-testing framework, I don't know.

~~~
dalyons
I've had a lot of success with Locust. It makes it super easy to simulate
realistic multi-step user flows with state (e.g. sign up, then create a
group, then add some participants, etc.), and then to probabilistically weight
the frequency of each flow (matching it to how frequently your users perform
particular actions). We found that if you do this for, say, your 3-5 most
common user flows, you can get a pretty realistic simulation of real load. You
can also distribute the "locusts" over a cloud of servers, coordinated via
ZMQ.

Big step up over siege/ab, IMHO.

------
jfindley
For very high traffic loads, I have yet to find anything that compares to wrk
([https://github.com/giltene/wrk2](https://github.com/giltene/wrk2)). IME, it
scales better and more efficiently than anything else I've tried, short of
complex distributed tools that are only relevant to a small number of people.

------
busterc
similar:
[https://github.com/hellgrenj/hulken](https://github.com/hellgrenj/hulken)

------
mitchtbaum
similar (Golang):
[https://github.com/rakyll/boom](https://github.com/rakyll/boom)

------
hamilyon2
I am surprised nobody has mentioned Yandex.Tank
([https://github.com/yandex/yandex-tank](https://github.com/yandex/yandex-tank))
yet. I found it highly configurable and relatively easy to use. For those who
are interested in good performance and features, it might be a good choice.

------
siscia
I am using siege to test my latest project, Numerino[1], a priority queue. My
experience has been great: easy to use and very performant.

[1]: [https://github.com/siscia/numerino](https://github.com/siscia/numerino)

------
zeisss
and there is also Gatling.io ;) [http://gatling.io/](http://gatling.io/)

