
That's not true. In most JavaScript implementations, integers will be represented by real 32-bit integers.



How does that help? Besides, they overflow to doubles if they get bigger than 2^31-1 anyway.

The problem with a naive average like this is that if you're averaging a bunch of big numbers, there is a very high chance that the running sum overflows (or, with doubles, silently loses precision), so even if every individual number fits in 32 or 53 bits, your result will be off.

If you're not averaging thousands of large numbers, why are you even using a package instead of nums.reduce((a,b)=>a+b) / nums.length anyway?
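For what it's worth, a running mean avoids the giant intermediate sum entirely. A minimal sketch (runningMean is a hypothetical helper, not the package's actual API):

    // Keep a running mean instead of a running sum, so the intermediate
    // value stays near the magnitude of the inputs instead of growing
    // with the length of the array.
    function runningMean(nums) {
      let mean = 0;
      for (let i = 0; i < nums.length; i++) {
        // incremental update: mean_i = mean_{i-1} + (x_i - mean_{i-1}) / i
        mean += (nums[i] - mean) / (i + 1);
      }
      return mean;
    }

    runningMean([9e15, 9e15, 9e15]); // 9000000000000000, without ever forming a sum above 2^53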


How does storing integers in 32-bit values help the issue of integers being truncated to 53 bits?


53 bits that cannot overflow (the result only becomes less accurate) are not enough, but 64 bits are, even with the risk of overflow?


Who said anything about 64 bits?


The complaint was that JavaScript doesn't have 64-bit integers, only 53 bits of integer precision.
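Concretely, with standard IEEE-754 doubles the 53-bit limit shows up like this (console-style examples, nothing engine-specific):

    Number.MAX_SAFE_INTEGER        // 9007199254740991, i.e. 2^53 - 1
    2 ** 53 === 2 ** 53 + 1        // true: consecutive integers above 2^53 collide
    Number.isSafeInteger(2 ** 53)  // false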


Clearly 2^32 + 2^32 < 2^53, so there is no problem ;-)


Source? I find this very hard to believe. 1. How does the engine know if it should be an integer or not? 2. Why would they have a special case for this? It won't benefit accuracy, and treating them in a special way probably hurts performance more than just always keeping them as doubles.


As long as everything used is really an integer, and everyone you work with is careful to keep things as ints (ORing with 0 and such), the engine can keep them as 32-bit integers internally.
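For example, the "ORing with 0" idiom looks roughly like this (an asm.js-style sketch, not something any particular engine guarantees to optimize):

    // x | 0 truncates to a signed 32-bit integer, which is the usual
    // hint that a value is meant to stay in the int32 range.
    function addInt(a, b) {
      a = a | 0;            // coerce arguments to int32
      b = b | 0;
      return (a + b) | 0;   // keep the result int32 (wraps on overflow)
    }

    addInt(3.7, 4);          // 7
    addInt(2 ** 31 - 1, 1);  // -2147483648: wraps instead of widening to a double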



