
In their documentation ( https://k6.io/docs/ ) they claim that

> JavaScript is not generally well suited for high performance. To achieve maximum performance, the tool itself is written in Go, embedding a JavaScript runtime allowing for easy test scripting.

How is it possible that a pure Go JavaScript interpreter (goja) with bindings for net/http and some reporting would be faster than the same tool written in Node.js using its HTTP client (which, if I remember correctly, is written in C)?

I don't mean to downplay the importance or usefulness of k6; I just find their reasoning behind choosing Go somewhat contrived.



I am not completely sure why the Go stdlib's HTTP client (which k6 uses) is faster than the Node.js one. I think part of it is the fact that k6 spins up a separate JS runtime for each VU. goja is a much, _much_ slower JS interpreter than V8, but load tests are IO-bound, so that's usually not an issue. And you can spin up thousands of VUs in k6 (especially with --compatibility-mode=base), making full use of the load generator machine.
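
For reference, with --compatibility-mode=base the Babel/core-js layer is skipped, so the script has to be plain ES5.1 with CommonJS-style requires. A minimal sketch looks roughly like this (the target URL and VU numbers are just placeholders):

    var http = require("k6/http");
    var k6 = require("k6");

    // 1000 concurrent VUs for 30 seconds - adjust to the load generator's capacity.
    module.exports.options = {
      vus: 1000,
      duration: "30s"
    };

    // Each VU runs this function in a loop, in its own JS runtime.
    module.exports.default = function () {
      http.get("http://test.k6.io");
      k6.sleep(1);
    };

and you run it with something like:

    k6 run --compatibility-mode=base script.js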

You can find some basic performance and other comparisons between load testing tools in this very long article of ours: https://k6.io/blog/comparing-best-open-source-load-testing-t...

And some advice for squeezing the maximum performance out of k6 in here: https://k6.io/docs/testing-guides/running-large-tests


What do you mean "very long article"? You could have used "extensive" or something ;)

Anyway, what I've seen when comparing the performance of tools is that Artillery, which runs on Node.js, is perhaps the worst performer of all the tools I've tested. I don't know if it's because of Node.js or because Artillery in itself isn't a very performant piece of software (it also consumes a lot of memory, btw).

If you want the highest performance, there is one tool that runs circles around all others (including k6), and that is wrk - https://github.com/wg/wrk - a very cool piece of software, although it is lacking in functionality, so it's mostly suitable for simpler load testing like hammering single URLs.
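
A typical invocation looks something like this (the thread count, connection count and target URL are just placeholders):

    wrk -t8 -c400 -d30s http://localhost:8080/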

(I don't know how fast wrk2 is, haven't benchmarked it)


I was skimming your "extensive" article again, and I saw this little gem I'd forgotten about: https://github.com/ragnarlonn/curl-basher ^^


Oh yeah, that's my own load testing tool, written in the high-performance language Bash. It is a bit feature-sparse but produces about the same RPS numbers as Drill (https://github.com/fcsonline/drill) and isn't too far behind Artillery. And it is about 0.0025 times as fast as wrk!


Hm, rlonn as in "Algonet"?


Possibly :)


Many benchmarks say JS VMs are blazingly fast, while at the same time others say they are battery hogs. And Apple is even optimizing their CPUs to make it run faster.

IMHO that is about JS memory usage and GC, which never really get proper attention.

You can have really fast algorithms, but if you keep creating and forgetting millions of objects, which most JS frameworks do, you will get lags and GC pauses.
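
A contrived sketch (the function names are made up) of the kind of allocation churn I mean: the first version creates and throws away an object on every iteration, the second does the same work without allocating anything.

    // Allocation-heavy: one short-lived object per iteration keeps the GC busy.
    function sumOfSquaresChurn(values) {
      var total = 0;
      for (var i = 0; i < values.length; i++) {
        var point = { value: values[i], squared: values[i] * values[i] };
        total += point.squared;
      }
      return total;
    }

    // Same result, no per-iteration allocations, so no GC pressure.
    function sumOfSquaresFlat(values) {
      var total = 0;
      for (var i = 0; i < values.length; i++) {
        total += values[i] * values[i];
      }
      return total;
    }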


My guess would be that starting up and loading all the JS libraries needed for a full tool would take quite a while.

In build scripts, it's important that you can start up each task (k6 instance) very fast.


> would take quite a while

No, and definitely not compared with the time it takes to actually run the load tests.



