It's unrealistic in the sense that it's much worse than what an average application would contain, but it's still useful for establishing a theoretical "worst case" for the overhead promises can introduce. The later test cases explore more "realistic" scenarios. The point of a microbenchmark is to condense the thing being tested into a small block of code, which is exactly what's done here. Because there's no IO or other blocking behaviour involved, this test isolates the pure overhead that the promise adds to the function.
The point isn't that promises should be used to perform calculations; it's about functions that end up marked async simply because a child function is async.
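To make that concrete, here's a rough sketch of the kind of microbenchmark being discussed (untested, Node-style timing with `process.hrtime.bigint()`; exact numbers will vary a lot between engines and versions):

```js
// Rough sketch: the same trivial work done sync vs. behind async/await.
function addSync(a, b) { return a + b; }
async function addAsync(a, b) { return a + b; }

function benchSync(n) {
  const start = process.hrtime.bigint();
  let sum = 0;
  for (let i = 0; i < n; i++) sum += addSync(i, i);
  console.log('sync :', Number(process.hrtime.bigint() - start) / 1e6, 'ms', sum);
}

// The caller has to become async too -- that propagation is exactly the
// overhead the microbenchmark is trying to isolate.
async function benchAsync(n) {
  const start = process.hrtime.bigint();
  let sum = 0;
  for (let i = 0; i < n; i++) sum += await addAsync(i, i);
  console.log('async:', Number(process.hrtime.bigint() - start) / 1e6, 'ms', sum);
}

benchSync(1e7);
benchAsync(1e7);
```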
> It's still useful to determine a theoretical "worst case" that could occur due to usage of promises here.
I disagree. It's not useful if the benchmark is both unrealistic and only applies to theoretical cases. As I said: no sane developer would ever do this kind of calculation "async."
The author of this benchmark could have spent the exact same amount of time writing an article that benchmarks realistic, practical situations, like reading from/writing to disk, communicating with an API or a database, etc.
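Something along these lines, for example (rough, untested sketch; the file name and iteration count are just placeholders), would at least measure async overhead against a real I/O workload:

```js
// Rough sketch: sync vs. promise-based file reads ('data.json' is a placeholder).
const fs = require('fs');
const fsp = require('fs/promises');

function benchSync(n) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) fs.readFileSync('data.json', 'utf8');
  console.log('sync readFile :', Number(process.hrtime.bigint() - start) / 1e6, 'ms');
}

async function benchAsync(n) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) await fsp.readFile('data.json', 'utf8');
  console.log('async readFile:', Number(process.hrtime.bigint() - start) / 1e6, 'ms');
}

benchSync(1000);
benchAsync(1000);
```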
You could argue that the benchmark is _only_ about the async/await/promise overhead, but that doesn't make sense either: everything you do in a script, an application, or whatever is basically "overhead" at some point. For example, if you want to calculate the Fibonacci numbers for 10 given inputs, just do it with a calculator and store the results in an array. Anything else is overhead. Actually, using an array is probably also overhead, depending on the situation at hand.
Please, benchmarkers: use _realistic_, real-world cases. No sane developer would use promises to do calculations.