
Not trying to be too snarky, but does that mean that JavaScript is going to be the first language that gets bigints before getting actual integers? How is that a reasonable sequence of steps in language evolution?



> before getting actual integers

JavaScript has always had "actual integers". As long as you stay within Number.MIN_SAFE_INTEGER...Number.MAX_SAFE_INTEGER (-9007199254740991..9007199254740991), every integer is represented exactly, and engines are free to use an integer representation internally.

http://2ality.com/2013/10/safe-integers.html
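
A quick way to see where exactness ends (a sketch; Number.isSafeInteger is the standard check):

  Number.isSafeInteger(2 ** 53 - 1); // true, this is MAX_SAFE_INTEGER
  Number.isSafeInteger(2 ** 53);     // false
  2 ** 53 + 1 === 2 ** 53;           // true, precision is lost past the safe range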


To get the operators to actually behave like integer operations, however, you need to use typed arrays. For example:

  let a = new Uint32Array(3);
  a[0] = 5;
  a[1] = 2;
  a[2] = a[0] / a[1]; // 5 / 2 is 2.5, but storing it into a Uint32Array truncates it to 2
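
For comparison, the same division on plain numbers keeps the fractional part unless you truncate it yourself (quick sketch):

  let x = 5 / 2;       // 2.5, plain numbers are doubles
  let y = (5 / 2) | 0; // 2, bitwise ops coerce to a signed 32-bit integer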


How is a bigint not an actual integer? If you mean restricting it to a certain number of bits - there are many other languages that avoid exposing low-level system details like that. Python and Ruby are two easy examples, and I'm sure there are many more.


Having an abstraction that transparently switches between fixnums and bignums is fine. This is what Ruby, Python, and the Lisp languages do. And the abstraction is purposefully a thin one - no one is really hiding that there are fixed-width integers underneath, because this distinction has huge performance implications.


That isn't fair - JavaScript has numbers that are already used on billions of sites, so they can't be broken. An important part of evolving a language that is always transmitted as source code is that you can't break existing code. Python tried with Python 3 - that attempted break now means many systems have both Python 2 and Python 3 installed, and there are libraries that can't be used together because they target different language versions. C and C++ can change the language in breaking (-ish) ways, because changing the syntax or semantics doesn't affect shipped (e.g. compiled) programs.

It also has a full set of 32-bit operations - "64-bit" used to be the "big int" case and required software implementations.

The choice to only support floating point numbers also makes sense - you have a dynamic language with no type annotations, so you can't distinguish between 1 (an int) and 1 (not an int), and as a result you can't determine whether a given arithmetic operation should be the integer version or the floating point one. So floating point makes the most sense, unless you want arbitrary precision floating point, which just isn't feasible for performance or sanity reasons. Especially a few decades ago.
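
Concretely, the two literals collapse to the same value before the runtime ever sees them, so there is nothing to dispatch on:

  1 === 1.0;  // true, both are the same double
  typeof 1;   // "number"
  typeof 1.0; // "number"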

So we get bigint - or more correctly, arbitrary-precision int. That has a distinctly different use case from regular arithmetic numbers, and so is always a distinct type.
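
You can see the separation in the proposal's semantics (at least as currently specced): BigInt is its own primitive type and doesn't mix implicitly with Number.

  typeof 10n;      // "bigint"
  10n + 10n;       // 20n
  10n + 1;         // TypeError, BigInt and Number can't be mixed implicitly
  10n + BigInt(1); // 11n, explicit conversion required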

So yeah, going floating point, and then bigint is a perfectly reasonable evolution of the language.

As an aside, I don't think it's the first, because Python's ints are already arbitrary precision (its other numeric type being floating point). Haskell defaults to arbitrary precision too (although it also supports fixed-precision ints).


> JavaScript has numbers that are already used on billions of sites, so they can't be broken.

For purposes of doing silly things in browsers, sure. But then someone wants to count money with it, and you suddenly have articles on HN about it, where half of the points can be summarized as "watch out for floating point errors". In the rest of the industry, the rule of thumb is "don't use floats for money".
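
The classic illustration, plus the usual workaround of counting in the smallest unit (a sketch, not production money-handling advice):

  0.1 + 0.2;           // 0.30000000000000004
  0.1 + 0.2 === 0.3;   // false
  let cents = 10 + 20; // 30, exact: keep amounts in integer cents instead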

> The choice to only support floating point numbers also makes sense - you have a dynamic language with no type annotations, so you can't distinguish between 1 (an int) and 1 (not an int), and as a result you can't determine whether a given arithmetic operation should be the integer version or the floating point one.

No, it doesn't (at least not for those reasons). Dynamic languages aren't untyped; in those languages, it's the values that have types. I may not know when reading the code whether x holds a float or a string, but I can query that at runtime. The way to tell whether 1 is an int or a float should be by means of functions like is_int() or is_float(). Sane programming languages handle this fine (hint: writing 1 usually means you want an int, writing 1.0 suggests floating point). Hell, even PHP can handle this fine.
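
For what it's worth, today's JS can only answer that question at the value level, where 1 and 1.0 have already collapsed into the same double:

  typeof "abc";          // "string", values carry their types
  typeof 1.5;            // "number"
  Number.isInteger(1);   // true
  Number.isInteger(1.0); // true, the literal distinction is gone by this point
  Number.isInteger(1.5); // false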

At this point I hope that whoever implements bigints in JS realizes that "small" bigints like 123n can be implemented as actual machine integers for a big performance gain, and that way we'll get fixnums through a back door.


JavaScript's number type can be used like a 32-bit integer for the most part, including bitwise operators and modulus.
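
For example, the bitwise operators coerce their operands to signed 32-bit integers, wrap-around included:

  0xFFFFFFFF | 0;          // -1, coerced to signed 32-bit
  (2147483647 + 1) | 0;    // -2147483648, wraps like an int32
  Math.imul(65536, 65536); // 0, 32-bit integer multiply
  7 % 3;                   // 1, modulus is exact on integer values in the safe range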

I agree that pure 32- and 64-bit integers would be nice, though. Especially since I need 64-bit bitwise integer math. BigInt surprisingly seems to be able to do that; I'm just not sure if the performance will be okay.

But being able to use bigints with binary literals is awesome: 123n & 0b1111n => 11n
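
And for the 64-bit case, the proposal also has BigInt.asUintN/asIntN to wrap a result to a fixed width, which is what you'd want when emulating 64-bit registers (sketch based on my reading of the proposal):

  const mask64 = (1n << 64n) - 1n; // 0xFFFFFFFFFFFFFFFFn
  BigInt.asUintN(64, mask64 + 1n); // 0n, wraps at 64 bits
  (mask64 & 0xFF00n) >> 8n;        // 255n
  BigInt.asIntN(64, mask64);       // -1n, reinterpreted as signed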



