author of the library here. The native implementations have to follow the spec exactly, which means they have to guard against specific edge cases. The reimplementations in fast.js do not follow the spec exactly; they optimise for 99.9% of use cases and totally ignore the 0.1%. This, combined with the fact that the JS is JITed straight into machine code anyway, gives fast.js a significant advantage: it does less work and so is faster.
native functions often have to cover complicated edge cases from the ECMAScript specification [...] By optimising for the 99% use case, fast.js methods can be up to 5x faster than their native equivalents.
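The trade-off being described can be sketched roughly like this. This is a hypothetical illustration, not fast.js's actual source; `fastForEach` is an invented name:

```javascript
// A spec-following forEach must (among other things) check that each
// index actually exists before invoking the callback. A "fast" version
// assumes a dense array and skips those per-index checks entirely,
// leaving a tight loop the JIT can optimise aggressively.
function fastForEach(arr, fn) {
  for (var i = 0; i < arr.length; i++) {
    fn(arr[i], i, arr); // no `i in arr` hole check, no ToObject, etc.
  }
}

var out = [];
fastForEach([1, 2, 3], function (x) { out.push(x * 2); });
// out is now [2, 4, 6]
```

On a dense array the result matches the native method; the difference only shows up on arrays with holes.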
They handle slightly different varieties of apples -- the fast versions ignore some BS edge cases that few ever use in the real world.
If I rewrite project X into X v2 and throw away 2-3 seldom used features in the process, it doesn't mean that comparing v1 and v2 is meaningless. You compare for what people actually use it for -- the main use cases. Not for everything nobody cares about.
Developers use forEach to apply a function to every element in an array. That is the main use case.
This faster version fails on some arrays (any array that isn't contiguous). This is a bug, not a feature. The library user is now responsible for checking that the array is contiguous before using the faster function.
Is it documented somewhere whether array returning/manipulating functions cause non-contiguity?
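I'm not aware of it being spelled out in one place, but the usual hole-creating operations are easy to demo, along with how a spec-following `forEach` then diverges from a plain loop (a quick sketch, nothing fast.js-specific):

```javascript
// Ordinary operations that leave holes (make an array sparse):
var a = [1, 2, 3];
delete a[1];          // deleting an index leaves a hole
var b = new Array(3); // Array(n) is all holes, length 3
var c = [];
c[2] = 'x';           // assigning past the end leaves holes at 0 and 1

// Native forEach follows the spec and skips holes entirely:
var visited = [];
a.forEach(function (v, i) { visited.push(i); });
// visited is [0, 2] -- the callback was never called for index 1

// A plain indexed loop "sees" the hole as undefined instead:
var looped = [];
for (var i = 0; i < a.length; i++) looped.push(a[i]);
// looped is [1, undefined, 3]
```

So it's not that the fast version crashes; it just disagrees with the native one on arrays like these.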
>This faster version fails on some arrays (any array that isn't contiguous). This is a bug, not a feature.
I'd argue it's actually a feature.
For one, it's not forEach. It doesn't mask the native implementation; it exists in its own namespace.
Second, it's not like forEach doesn't have its own problems, like not being backwards compatible with older browsers without shims.
Third, nobody (or very, very small values of "somebody") uses non-contiguous arrays with forEach.
The speed is a great feature, especially if you don't sacrifice anything to achieve it.
>The library user is now responsible for checking that the array is contiguous before using the faster function.
The library user already knows whether he is using contiguous arrays or not. After all, they can fuck him over in tons of other cases -- even a simple for loop, or a length check -- if he treats one like the other. So it's not like he doesn't already have to be vigilant about it.
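For callers who do want an explicit check, a guard is trivial to write. This is a hypothetical helper (`isDense` is not part of fast.js), just a sketch of what "being vigilant" costs:

```javascript
// Returns true only if every index from 0 to length-1 actually exists.
// A missing index (a hole) fails the `in` check.
function isDense(arr) {
  for (var i = 0; i < arr.length; i++) {
    if (!(i in arr)) return false;
  }
  return true;
}

var dense = [1, 2, 3];
var sparse = [1, , 3]; // elision leaves a hole at index 1
// isDense(dense) === true, isDense(sparse) === false
```

Of course, running this guard before every call eats into the speedup, which is why it makes more sense to know your data up front.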
From a computer science perspective it is formally 'meaningless' because the performance improvement was made by changing the requirements, not by a novel algorithm or data structure. This is always possible, but it ignores the original constraints of the problem and hence doesn't shed any additional meaning on how to solve the original problem.
From a programmer's perspective, that might not change the fact that it's useful. It has 'meaning' to the programmer in that it helps solve a particular problem. Typically, app programmers are not as concerned with how the problem was solved.
I like the library, but to avoid this kind of criticism, don't call it a reimplementation. Call it what it is, an alternative library of functions.
>From a computer science perspective it is formally 'meaningless' because the performance improvement was made by changing the requirements, not by a novel algorithm or data structure.
I've studied computer science and I don't recall seeing any such "formal" definition of meaninglessness.
Nothing in computer science tells you you have to have the exact same behavior in 2 APIs in order to measure some common subset of functionality. You can always restrict your measurements to that. Computer science is not that concerned with APIs anyway.
So you can use very formal computer science to compare the performance in big-O of X vs Y implementation of the common supported cases (namely, contiguous arrays).
>This is always possible, but it ignores the original constraints of the problem and hence doesn't shed any additional meaning on how to solve the original problem.
There's no "original" problem that one has to take wholesale. There are use cases, and some care about X, others about Y.
Scientific papers in CS compare implementations of things like filesystems, memory managers, garbage collectors etc. ALL THE TIME, despite them not having the same API and one having more or fewer features. They just measure what they want to measure (e.g. performance of characteristic X) and ignore the others.
Why would it be meaningless? The comparison shows that fast.js works for what it was intended to do: it does stuff faster than its native counterparts.
To say the comparison is meaningless is a little unfair.
Generally yes, the results are not expected to be 100% equivalent, as these routines do not produce the same results for some inputs.
But as the range of inputs for which the results are expected to be the same is defined, the comparison is meaningful within those bounds (which cover quite a lot of the cases these functions will be thrown at).