Show HN: Mitata – Benchmarking tooling for JavaScript (github.com/evanwashere)
92 points by evnwashere 6 days ago | hide | past | favorite | 25 comments
I've always had a sweet tooth for how easy google/benchmark is to use, but when working with JS, the current libraries didn't feel right and some weren't even accurate enough, so I decided to create my own library to make JavaScript benchmarking tooling better.

With more free time, I finally implemented all the features I wished for in 1.0.0 and made a lightweight C++ single-header version for moments when google/benchmark is too much.

Hope this library helps you as much as it does me.


Is it by accident that "mitata", the project name, means "to measure" in Finnish?

it was hand picked for exactly that :)

Hi! In your benchmark, do you use a fixed number of iterations to stop the test, or do you apply a statistical criterion, such as the Student's t-test, to determine when to stop?

i didn’t want it to be complex, so it uses a simple time budget + a minimum number of samples; both (and more) can be configured with the lower-level api.

in practice i haven’t found any js function that gets faster after mitata’s time budget (excluding cpu clock speed increasing because of continuous workload)

another problem is that garbage collection can cause long pauses, which produce big jumps in some runs and make the loop keep searching for the best result longer than necessary
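For readers curious what that stopping rule looks like, here is an illustrative sketch in plain JS — this is not mitata's actual code or API, just the "time budget + minimum sample count" idea described above:

```javascript
// Illustrative sketch (not mitata's implementation): keep collecting
// samples until BOTH a minimum sample count and a time budget are met.
function sample(fn, { minSamples = 128, budgetMs = 500 } = {}) {
  const samples = [];
  const start = performance.now();
  while (
    samples.length < minSamples ||        // always take at least N samples
    performance.now() - start < budgetMs  // then run until the budget is spent
  ) {
    const t0 = performance.now();
    fn();
    samples.push(performance.now() - t0);
  }
  // a long GC pause inflates a single sample, which is how outlier runs
  // can keep a "best result" search going longer than necessary
  samples.sort((a, b) => a - b);
  return {
    n: samples.length,
    min: samples[0],
    avg: samples.reduce((a, b) => a + b, 0) / samples.length,
  };
}
```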


This is awesome! I’ve been working on optimizing a javascript library recently and am feeling the pain of performance testing. I’ll check this out.

Wow, I was just looking at how to benchmark a streaming JSON parser that I'm working on! I'm creating it specifically for performance-intensive situations with JSON strings up to gigabytes in size, and I thought I'd have to implement about half of the features you mention there, like parametrisation and automatic GC after every test.

When you say streaming JSON parser, do you mean that it outputs a live “observable” object as it is streaming, or that it just doesn’t keep the entire source data in memory? I’ve done some work on the former for displaying rich LLM outputs as they are delivered - it’s a surprisingly underexplored area from what I’ve seen.

It means that prior to parsing the JSON, the parser is given the exact path (or paths, or wildcards) it must retrieve, and then it scans the string in one forward pass with the minimum possible allocations. It's for cases where you, for some reason, have to process an enormous amount of serialised objects as strings, need to get just a few small things out of them occasionally, and have to do it in JS.

As it processes input in batches, you can also use it in cases where you don't even need to load the whole input data into memory, if you choose to.
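The one-forward-pass idea can be illustrated with a toy version — a single top-level key, no wildcards or batching, and definitely not the commenter's actual parser — that slices one value out of a JSON string without building the full object tree:

```javascript
// Toy illustration (not the commenter's parser): pull one top-level key
// out of a JSON string in a single forward scan.
function extractTopLevel(json, key) {
  const target = JSON.stringify(key);
  let depth = 0;
  for (let i = 0; i < json.length; i++) {
    const c = json[i];
    if (c === '"') {
      const start = i++;
      while (i < json.length && json[i] !== '"') {
        if (json[i] === '\\') i++; // skip escaped characters
        i++;
      }
      if (depth === 1 && json.slice(start, i + 1) === target) {
        let j = i + 1;
        while (/\s/.test(json[j])) j++;
        if (json[j] !== ':') continue; // matched a value, not a key
        j++;
        while (/\s/.test(json[j])) j++;
        return JSON.parse(sliceValue(json, j)); // parse only the value
      }
    } else if (c === '{' || c === '[') depth++;
    else if (c === '}' || c === ']') depth--;
  }
  return undefined;
}

// Slice out one JSON value (string, object/array, or scalar) starting at j.
function sliceValue(json, j) {
  if (json[j] === '"') {
    let k = j + 1;
    while (json[k] !== '"') { if (json[k] === '\\') k++; k++; }
    return json.slice(j, k + 1);
  }
  if (json[j] === '{' || json[j] === '[') {
    let d = 0, k = j;
    do {
      const c = json[k];
      if (c === '"') { k++; while (json[k] !== '"') { if (json[k] === '\\') k++; k++; } }
      else if (c === '{' || c === '[') d++;
      else if (c === '}' || c === ']') d--;
      k++;
    } while (d > 0);
    return json.slice(j, k);
  }
  let k = j; // number, true, false, or null
  while (k < json.length && !',}] \n\t\r'.includes(json[k])) k++;
  return json.slice(j, k);
}
```

A real implementation would additionally handle chunked input, multiple paths, and wildcards, but the core trick — tracking depth and consuming string tokens whole — is the same.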


Definitely going to try this out!!

I’ve been using the `vitest bench` command; being able to slap a `.bench.ts` file next to a module and go to town is convenient: https://vitest.dev/guide/features.html#benchmarking
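For reference, such a `.bench.ts` file looks roughly like this — the suite name and the two compared implementations here are made up for illustration:

```typescript
// math.bench.ts — picked up by `vitest bench`; not runnable standalone
import { bench, describe } from 'vitest';

const nums = Array.from({ length: 10_000 }, (_, i) => i);

describe('sum 10k numbers', () => {
  bench('for loop', () => {
    let total = 0;
    for (let i = 0; i < nums.length; i++) total += nums[i];
  });

  bench('reduce', () => {
    nums.reduce((a, b) => a + b, 0);
  });
});
```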


That's very interesting. At the time, I saw some screenshots on Twitter of benchmarks used by Bun and Deno, and they inspired me to suggest that Vitest add a benchmark command. Later I learned that they were all using Mitata internally. https://github.com/vitest-dev/vitest/issues/917#issuecomment...

Very good work and design, glad to see a stable 1.0 is released!


vitest is nice, but it’s completely unsuited for micro-benchmarks, as it ends up OOM-crashing after just two optimized-out benchmarks

Yeah, I’ve hit those OOM issues with vitest before too. Mitata’s time budget + sample approach sounds like a solid way to keep things simple while avoiding those long GC pauses. Excited to give it a try on my own benchmarks!

Any plans for web compatible output?

I maintain this repo, and we hand roll the stats page, but if we could get that for free it’d be so great!

https://github.com/moltar/typescript-runtime-type-benchmarks


I have been thinking of reusing/creating something like https://perf.rust-lang.org/ that lets you pick and compare specific hash/commit with all data from json format

Hey, I wrote about this once! I use it a ton. Thanks for your work. I can’t wait to dig into 1.0.

The lack of "Miata is always the answer" comments in this thread and the readme is troubling.

This is for "headless" JavaScript outside the browser, right?

Never heard JS called "headless". Not sure I like it.

edit: all JS is "headless". almost all languages are headless. _Software_ can be headless or have a GUI. but languages are naturally headless.


Headless browsers. I guess this is a very closely related concept.

There’s a lot of server side js. Mostly plumbing code but there’s certainly “headless” js

I'm very aware of JS run on servers. And I knew that's what OP meant. I'm saying I'm not sure I like the usage. Maybe it's a generational dev vocabulary thing... I prefer "browser" or "client" JS vs "server" or "backend" JS.

It works anywhere javascript works, so you can easily run it in the browser too. Though the idea of making a jsbench-like website, but with mitata accuracy (+ dedicated runners), keeps bugging me.

wow, what timing! I started building Speedrun yesterday to accommodate my daily needs

https://toolkit.pavi2410.me/tools/speedrun

https://github.com/pavi2410/toolkit/issues/8


do you support running the benchmark in a web worker?

yes, you can import mitata inside a web worker and run it there; if you only need raw results you can even use the lower-level api https://github.com/evanwashere/mitata?tab=readme-ov-file#giv...
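A minimal sketch of that setup, assuming a browser/bundler context with module workers — the file names are made up, and `run()` returning the collected results is an assumption to check against mitata's docs:

```javascript
// bench.worker.js — hypothetical worker file: runs the benchmarks off
// the main thread and posts the results back.
import { run, bench } from 'mitata';

self.onmessage = async () => {
  bench('JSON.parse', () => JSON.parse('{"a":1}'));
  const results = await run(); // assumed: run() resolves with the results
  self.postMessage(results);
};

// main.js — spawn the worker and collect the results.
const worker = new Worker(new URL('./bench.worker.js', import.meta.url), { type: 'module' });
worker.onmessage = (e) => console.log(e.data);
worker.postMessage('start');
```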


