let a = new Uint32Array(3);
a[0] = 5;
a[1] = 2;
a[2] = a[0] / a[1]; // 2.5 is truncated to 2 on store — the array holds 32-bit unsigned ints
It also has a full set of 32-bit operations - “64-bit” used to mean “big int” and required software implementations.
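To illustrate (assuming we're talking about JS's bitwise operators and Math.imul, which operate on 32-bit integers):

```javascript
// JavaScript's bitwise operators coerce their operands to 32-bit integers,
// so |0 is the common idiom for truncating a number to an int:
const q = (5 / 2) | 0; // 2 — the fractional part is dropped

// Math.imul performs true 32-bit integer multiplication, with wraparound:
const wrapped = Math.imul(0x7fffffff, 2); // -2 in signed 32-bit arithmetic

console.log(q, wrapped);
```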
The choice to only support floating point numbers also makes sense - you have a dynamic language with no type annotations, so you can’t distinguish between 1 (an int) and 1 (a float), and as a result you can’t determine whether a given arithmetic operation should be the integer version or the floating point one. So floating point makes the most sense, unless you want arbitrary precision floating point, which just isn’t feasible for performance or sanity reasons. Especially a few decades ago.
So we get bigint - or, more correctly, arbitrary-precision int. That has a distinctly different use case from regular arithmetic numbers, and so is always a distinct type.
So yeah, going floating point, and then bigint is a perfectly reasonable evolution of the language.
As an aside, I don’t think JS is the first to take this path: Python uses arbitrary-precision ints alongside floating point, and Haskell defaults to arbitrary precision as well (although it also supports fixed-precision ints).
For purposes of doing silly things in browsers, sure. But then someone wants to count money with it, and you suddenly have articles on HN about it, and half of their points can be summarized by "watch out for floating point errors". In the rest of industry, the rule of thumb is, "don't use floats for money".
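The classic example of what those HN articles warn about (illustrative, not exhaustive):

```javascript
// Binary floating point can't represent 0.1 or 0.2 exactly,
// so summing decimal fractions of a currency unit drifts:
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// The usual workaround: keep money in integer minor units (cents),
// where every value and sum is exact:
const cents = 10 + 20; // 30 — exact
console.log(cents === 30); // true
```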
> The choice to only support floating point numbers also makes sense - you have a dynamic language with no type annotations, so you can’t distinguish between 1 (an int) and 1 (a float), and as a result you can’t determine whether a given arithmetic operation should be the integer version or the floating point one.
No, it doesn't (at least not for those reasons). Dynamic languages aren't untyped; in those languages, it's the values that have types. I may not know when reading the code whether x holds a float value or a string value, but I can query for that at runtime. The way to tell whether 1 is an int or a float should be by means of functions like is_int() or is_float(). Sane programming languages handle this fine (hint: writing 1 usually means you want an int, writing 1.0 suggests floating point). Hell, even PHP can handle this fine.
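For what it's worth, in JS specifically that distinction doesn't exist at runtime - both literals produce the same double value, and the closest thing to an is_int() is Number.isInteger(), which tests the value rather than how the literal was spelled:

```javascript
console.log(typeof 1);   // "number"
console.log(typeof 1.0); // "number"
console.log(1 === 1.0);  // true — both are the same IEEE 754 double

// Number.isInteger tests the value, not the literal's spelling:
console.log(Number.isInteger(1.0)); // true
console.log(Number.isInteger(1.5)); // false
```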
At this point I hope that whoever implements bigints in JS realizes that "small" bigints like 123n can be implemented as actual machine integers for much performance gain, and this way we'll get fixnums through a back door.
I agree that pure 32 and 64 bit integers would be nice, though. Especially since I have need for 64 bit bitwise integer math. Bigint surprisingly seems to be able to do that, I'm just not sure if the performance will be okay.
But being able to use bigints with binary literals is awesome: 123n & 0b1111n => 11n
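And for the 64-bit use case: BigInt.asUintN and BigInt.asIntN clamp a bigint to a fixed width, which gives you wrapping 64-bit arithmetic on top of the arbitrary-precision type - a sketch:

```javascript
// BigInt.asUintN(64, x) reduces x modulo 2^64, emulating
// unsigned 64-bit wraparound:
const max64 = (1n << 64n) - 1n; // 0xffffffffffffffffn
console.log(BigInt.asUintN(64, max64 + 1n)); // 0n — wrapped around

// Bitwise operators work on bigints directly:
console.log(123n & 0b1111n); // 11n
console.log(max64 >> 32n);   // 4294967295n — the high word shifted down
```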