

Boom – HTTP(S) load generator - bojo
https://github.com/rakyll/boom

======
trentnelson
Hmmm. I like the output better than wrk and wrk2; however, it doesn't seem
particularly efficient at generating high load in comparison to wrk:

    
    
        (trent@imac:ttys003) (Thu/09:54) .. (~s/gopath/bin)
        % time ./boom -cpus 8 -n 10000 http://panther:8080/plaintext 
        10000 / 10000 Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo! 100.00 % 
    
        Summary:
          Total:	8.4509 secs.
          Slowest:	1.1200 secs.
          Fastest:	0.0007 secs.
          Average:	0.0359 secs.
          Requests/sec:	1092.3152
          Total Data Received:	138465 bytes.
          Response Size per Request:	15 bytes.
    
        Status code distribution:
          [200]	9231 responses
    
        Response time histogram:
          0.001 [1]	|
          0.113 [9009]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
          0.225 [0]	|
          0.336 [0]	|
          0.448 [0]	|
          0.560 [0]	|
          0.672 [0]	|
          0.784 [0]	|
          0.896 [0]	|
          1.008 [39]	|
          1.120 [182]	|
    
        Latency distribution:
          10% in 0.0023 secs.
          25% in 0.0053 secs.
          50% in 0.0100 secs.
          75% in 0.0161 secs.
          90% in 0.0255 secs.
          95% in 0.0343 secs.
          99% in 1.0158 secs.
    
        Error distribution:
          [769]	Get http://192.168.1.244:8080/plaintext: dial tcp 192.168.1.244:8080: can't assign requested address
        ./boom -cpus 8 -n 10000 http://panther:8080/plaintext  0.88s user 2.40s system 38% cpu 8.463 total
    

Ok, so that took ~9 seconds to run 10,000 requests (769 of which errored
out). Let's run wrk for 9 seconds and compare:

    
    
        (trent@imac:ttys003) (Thu/09:55) .. (~s/wrk)
        % ./wrk --latency -d 9 -t 8 -c 8 http://panther:8080/plaintext
        Running 9s test @ http://panther:8080/plaintext
          8 threads and 8 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   459.03us  263.24us   6.84ms   90.59%
            Req/Sec     2.27k   285.87     3.22k    76.54%
          Latency Distribution
             50%  407.00us
             75%  520.00us
             90%  675.00us
             99%    1.43ms
          153940 requests in 9.00s, 27.60MB read
        Requests/sec:  17111.78
        Transfer/sec:      3.07MB
    

That served 15x more requests?

~~~
nXqd
I find wrk well optimized, and it's written in C, so it's hard to beat :)

~~~
outworlder
Does it matter that much for such an I/O heavy usage?

------
chrisfarms
Looks closer to siege[1] than ab.

[https://www.joedog.org/siege-home/](https://www.joedog.org/siege-home/)

------
willemlabu
Best bit of the README:

> Speed index: Hahahaha

~~~
CatsoCatsoCatso
A quick Google shows this is an unreliable measure; could you explain the
joke further?

------
bluesmoon
How do you handle the case where the load generator runs out of resources
(i.e., file handles, TCP sockets, network interface throughput, etc.) before
it is able to generate sufficient load to test the server?

~~~
mryan
Step 1) Increase the resource limits as much as possible.

Step 2) Distribute the load generator across multiple machines.

