How does that help? Besides, they overflow to doubles as soon as they get bigger than 2^31-1 anyway.
The problem with a naive average like this is that when you're averaging a bunch of big numbers, the running sum can grow past 2^53 and start losing precision, so even if every individual number fits exactly in 32 or 53 bits, your result will be off.
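For example, here's a quick sketch with made-up inputs showing the intermediate sum rounding away small terms, and a running-mean loop (Welford-style) that keeps its accumulator near the mean's magnitude instead:

```javascript
// All arithmetic here is IEEE-754 double, as in JS.
const nums = [2 ** 53, 1, 1];

// Naive: the intermediate sum sits at 2**53, where the spacing between
// adjacent doubles is 2, so each "+ 1" rounds away entirely.
const naiveSum = nums.reduce((a, b) => a + b, 0);
const naiveAvg = naiveSum / nums.length;
console.log(naiveSum === 2 ** 53); // true: the ones vanished

// Running mean: the accumulator stays near the magnitude of the mean
// instead of growing with the total.
let mean = 0;
for (let i = 0; i < nums.length; i++) {
  mean += (nums[i] - mean) / (i + 1);
}

// mean lands closer to the true average (2**53 + 2) / 3 than naiveAvg does.
console.log(naiveAvg, mean);
```

Not bulletproof either (the incremental update has its own rounding), but it avoids the blow-up of the accumulator, which is the failure mode here.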
If you're not averaging thousands of large numbers, why are you even using a package instead of nums.reduce((a,b)=>a+b) / nums.length anyway?
Source? I find this very hard to believe. 1. How does the engine know if it should be an integer or not? 2. Why would they special-case this? It wouldn't improve accuracy, and treating integers specially probably hurts performance more than just always keeping them as doubles.