
The point in this one is that SpiderMonkey is beating V8 on a benchmark designed so that V8 wins.

Yeah, no. They chose intensive workloads; many of them are ports of code from other languages, and some of the tests use software made by Mozilla, such as pdf.js and zlib (via Emscripten), and by Microsoft (the TypeScript compiler).

But they also included stuff like splay-latency, which explicitly tests something that doesn't matter on the Web (support for incremental GC during script execution, which doesn't matter in browsers because the effects of a script are never made visible to the user until the script ends). Guess what: V8 has that and SpiderMonkey doesn't.

Can you explain more what you think is unfair about the test? I've only given it[1] a brief read and it certainly is a synthetic benchmark, but it doesn't seem absurdly artificial.

I've certainly written JS that builds trees, modifies them, tears them down, and builds new ones, all within the same event loop turn, and so would certainly benefit from incremental GC in the cases when the data it's working over is very large (or it's just a part of an app that's already been burning through a large number of allocations).

[1] https://code.google.com/p/octane-benchmark/source/browse/tru...
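For illustration, here's a hedged sketch of the kind of single-turn, allocation-heavy work described above (the function names and sizes are invented, not taken from the benchmark): each round builds a large tree, modifies it, and drops it, so a stop-the-world collector can hit one long pause mid-turn, while an incremental GC could spread that work into slices.

```javascript
// Build, mutate, and discard large trees within a single event-loop turn.
// Each discarded tree becomes garbage immediately, so collection pressure
// builds up before the turn ends.
function buildTree(depth) {
  if (depth === 0) return null;
  return {
    left: buildTree(depth - 1),
    right: buildTree(depth - 1),
    value: depth,
  };
}

function churn(rounds, depth) {
  let checksum = 0;
  for (let i = 0; i < rounds; i++) {
    const root = buildTree(depth); // large allocation burst
    root.value = i;                // modify the tree
    checksum += root.left.value;   // read it so the work isn't dead code
    // root is unreachable after this iteration; the next one reallocates
  }
  return checksum;
}

churn(50, 15); // roughly 50 * 2^15 short-lived objects, all in one turn
```

Whether this actually benefits from incremental GC is exactly what's disputed in the replies below: the pauses happen mid-turn either way, and nothing is rendered until the turn ends.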

Your app won't benefit from incremental GC during an event loop turn, because nothing your script did before the incremental GC slice will be made visible to users until the entire event loop turn is finished.

But what about all the other processes on the device? Another process running at the same time might benefit from such GC: perhaps a video being played inside a plugin, or just Photoshop running side by side with the browser, and things like that.

It may not benefit the app but it may benefit the user, which is clearly a relevant metric.
