

Locust – Testing framework simulating the burstiness of an end user - tamentis
http://truveris.github.io/articles/locust/

======
skinp
I have used Locust heavily in the past few months for loadtesting various apps
and APIs. I tried a couple of different alternatives before settling on it,
including ab and Vegeta.

Having the power of Python for scripting my loadtest was probably what sealed
the deal for me. It allowed me to create a very powerful, reusable loadtesting
framework that I can adapt very quickly to any app. Python scripting also
allowed me to add features not built into Locust's core itself, like
additional logging or metrics reporting and reading API endpoints from
file/Redis/...
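As an illustration of that kind of scripting - a hypothetical sketch, not skinp's actual framework; the file name, endpoints, and wait times are made up, and the HttpLocust/TaskSet API matches the Locust releases of that era (newer ones use HttpUser):

```python
import random

def load_endpoints(path):
    """Read endpoint paths from a text file, one per line,
    skipping blank lines and #-comments."""
    with open(path) as f:
        stripped = (line.strip() for line in f)
        return [line for line in stripped if line and not line.startswith("#")]

try:
    from locust import HttpLocust, TaskSet, task

    class UserBehavior(TaskSet):
        def on_start(self):
            # Each simulated user loads the endpoint list once at startup.
            self.endpoints = load_endpoints("endpoints.txt")

        @task
        def hit_random_endpoint(self):
            self.client.get(random.choice(self.endpoints))

    class WebsiteUser(HttpLocust):
        task_set = UserBehavior
        min_wait = 1000  # ms to wait between tasks
        max_wait = 3000
except Exception:
    pass  # locust absent or its API changed; load_endpoints still works alone
```

Swapping `endpoints.txt` for a Redis lookup only changes `load_endpoints`; the rest of the script stays the same, which is the reusability being described.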

Also, being able to completely test all endpoints of an app at once is really
useful. Most of the time, what I really want to know is whether the whole
backend can handle the full traffic I'm expecting. Having clustering built in helped a lot
here. I was able to scale the loadtest to several thousands of RPS by just
adding a couple more slaves...
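For reference, spinning up such a cluster is just a couple of commands - flag names are from the Locust docs of that era (newer releases renamed "slave" to "worker"), and the hostname is a placeholder:

```shell
# On the coordinating machine:
locust -f locustfile.py --master

# On each load-generating machine (add more to raise total RPS):
locust -f locustfile.py --slave --master-host=master.example.com
```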

The only feature I would like to see added is built-in Vegeta style graphs of
latency/RPS over time. By default, Locust only gives you real-time stats for
the last second during the load test, and final average results in CSV files.
Combining metrics reporting with a graphing engine like Graphite can fix that
though.
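A minimal sketch of that metrics-reporting idea, assuming Carbon's plaintext listener on its default port 2003; the `events.request_success` hook matches the old 0.x Locust API, and the metric path scheme is made up:

```python
import socket
import time

def graphite_line(path, value, ts=None):
    """One line of Graphite's plaintext protocol: '<path> <value> <timestamp>'."""
    if ts is None:
        ts = int(time.time())
    return "%s %s %d\n" % (path, value, ts)

def send_to_graphite(line, host="127.0.0.1", port=2003):
    """Push one metric line to Carbon's plaintext listener."""
    sock = socket.create_connection((host, port), timeout=1)
    try:
        sock.sendall(line.encode("ascii"))
    finally:
        sock.close()

try:
    from locust import events

    def report_success(request_type, name, response_time, response_length):
        # e.g. metric path "locust.GET.api_users.response_time"
        metric = "locust.%s.%s.response_time" % (
            request_type, name.strip("/").replace("/", "_") or "root")
        send_to_graphite(graphite_line(metric, response_time))

    events.request_success += report_success
except Exception:
    pass  # locust absent or its events API changed; the helpers work standalone
```

With per-request timings landing in Graphite, plotting latency/RPS over time is a dashboard query rather than a Locust feature.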

I highly recommend it.

------
makmanalp
I've used Locust before - had a good experience. The benefit here is that you
can more easily script and simulate "real"ish looking traffic as opposed to
hitting the same 20 random URLs - hopefully that'll give you a better idea of
your projected real world performance than running ab on one URL that's cached
after the first hit anyway.

------
simple10
Here's a list of tools and services for HTTP load testing and benchmarking. I
added Locust to the list. Let me know if the list is missing anything
important.

https://github.com/simple10/guides/blob/master/load_testing.md

~~~
sciurus
The Grinder, Gatling, Tsung, and JMeter to name a few.

------
jterfi
I've compared using Locust to Gatling and the only downside is that Locust
wasn't able to reach the same RPS I was getting with Gatling. It's a shame
because Locust is much nicer to work with in comparison.

~~~
the_mitsuhiko
> I've compared using Locust to Gatling and the only downside is that Locust
> wasn't able to reach the same RPS I was getting with Gatling. It's a shame
> because Locust is much nicer to work with in comparison.

That's hardly an issue as you can trivially build multi-machine swarms with
Locust.

~~~
jterfi
When I could barely get 200 RPS from a Locust instance, but 10,000 RPS from a
Gatling instance, that's a huge issue if you don't have an unlimited budget.

~~~
jonatanheyman
That sounds strange. Unless you're doing a lot of computationally expensive
work in your test scripts, you should easily be able to reach a higher RPS.

Also, are you comparing a single Locust process to Gatling? Since Gatling runs
on the JVM, it would be more fair to compare it to a Locust cluster with one
locust process per processor core.

------
meesterdude
Interesting! This looks like the kind of direction I was thinking my
cloudspeq gem could go.

------
vezzy-fnord
What is the advantage of this over the variety of headless browsers and
acceptance testing frameworks that currently exist?

~~~
krallin
Locust is really focused on load tests (rather than acceptance tests). It's
an alternative to ab rather than to a headless browser.

You definitely wouldn't use Locust to test a UI, or even to test that your
API responds correctly, but you'd use it to find out how many requests per
second that API can serve without falling over.

---

Two key differentiators (advantages?) Locust has over ab are:

+ It's scriptable, so you can design complex interactions like those
described in the blog post (or even something as simple as logging in prior
to hitting the app).

+ It supports clustering. It's easy to set up a multi-host Locust cluster,
and results are aggregated on your Locust master.

I don't have an affiliation with Locust, but I did find it extremely easy to
set up and use when I needed it (for HTTP load tests at ~20K RPS).

I do recall a few scale issues (e.g. running out of client ports would break
the clustering), but with a bit of fine-tuning, it runs like a charm.
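For the record, the client-port issue is usually a matter of OS tuning rather than Locust itself; on Linux the fine-tuning looks something like this (illustrative values, not a recommendation):

```shell
# Widen the ephemeral (client) port range available for outbound connections:
sysctl -w net.ipv4.ip_local_port_range="10240 65535"

# Allow sockets stuck in TIME_WAIT to be reused for new outbound connections:
sysctl -w net.ipv4.tcp_tw_reuse=1
```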

