> That's not really a JS problem, that's a floating point problem
More accurately it's a binary problem. 0.1 and 0.3 have non-terminating representations in binary, so it's completely irrelevant whether you're using fixed or floating point.
Any fraction whose denominator (in lowest terms) has only 2 and 5 as prime factors has a terminating decimal representation, whereas only fractions whose denominator is a power of 2 have a terminating representation in binary. The latter set is clearly a subset of the former, so it seems obvious that we should be using decimal types by default in our programming languages.
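For illustration, a quick Java sketch: the `new BigDecimal(double)` constructor exposes the exact value a double actually stores, so you can see which literals survive the round trip.

```java
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // 1/2 = 0.5 has a power-of-2 denominator, so the double is exact:
        System.out.println(new BigDecimal(0.5));  // 0.5
        // 1/10 has a factor of 5 in the denominator, so binary can only approximate it:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```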
While true, there will be rational numbers you can't represent as floating-point numbers no matter which base you choose. And the moment you start calculating with inexactly represented numbers, the errors can compound until the result of your computation is wildly wrong. This is the much bigger "problem" of floats, not the fact that 0.3 is not "technically" 0.3 but off by some minuscule amount.
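A minimal Java sketch of that compounding (the exact digits printed may vary, but the drift is real):

```java
public class Drift {
    public static void main(String[] args) {
        // The tiny representation error in 0.1 accumulates over a million additions:
        double sum = 0.0;
        for (int i = 0; i < 1_000_000; i++) sum += 0.1;
        System.out.println(sum);  // something like 100000.00000133288, not 100000.0
        // Worse: subtracting nearly equal inexact values cancels the correct
        // leading digits and leaves mostly error (catastrophic cancellation):
        double a = 1.0 + 1e-7, b = 1.0;
        System.out.println((a - b) * 1e7);  // close to, but not exactly, 1.0
    }
}
```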
It's not a binary problem, it's a problem with one particular binary representation. You can encode 0.1 and 0.3 in binary such that they are exact. In Java, for example, just use BigDecimal (an integer unscaled value plus an integer scale) and you are ok.
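A quick sketch of the difference:

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);  // 0.30000000000000004 with binary doubles
        // BigDecimal stores an integer unscaled value plus an integer scale,
        // so 0.3 is exactly (unscaledValue = 3, scale = 1) and the sum is exact:
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum);                  // 0.3
        System.out.println(sum.unscaledValue());  // 3
        System.out.println(sum.scale());          // 1
    }
}
```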
The floating-point standard (IEEE 754) defines various formats; the common float/double types are called binary32 and binary64. It also defines decimal formats (decimal32, decimal64, decimal128).
> integer unscaled value + integer scale
A binary float does the same thing, just using 2^exp instead of 10^exp for the scale.
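You can see that pair directly by pulling a double apart. A sketch assuming a normal, positive value (subnormals aren't handled):

```java
public class BitsDemo {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(0.1);
        // 52 stored fraction bits plus the implicit leading 1:
        long mantissa = (bits & 0x000FFFFFFFFFFFFFL) | 0x0010000000000000L;
        // remove the exponent bias (1023) and account for the 52 fraction bits:
        int exp = (int) ((bits >>> 52) & 0x7FF) - 1023 - 52;
        System.out.println(mantissa + " * 2^" + exp);
        // 7205759403792794 * 2^-56, i.e. 3602879701896397 / 2^55:
        // the closest representable value to 0.1, not 0.1 itself
    }
}
```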
In pure mathematics, you can get perfect precision with non-terminating fractions. For example, 0.(6) + 0.(3) = 1 is true. The decimal (or binary) representation is just "syntax sugar" for the actual fraction - in this case, 2/3 + 1/3 = 1; or, written in binary, 10/11 + 1/11 = 1, or 0.(10) + 0.(01) = 1.
Note: I'm using a notation for infinitely repeating decimals that I learned in school - 0.(6) means 0.6666666...; 0.(01) means 0.010101010101...
Floating bar numbers are an interesting way of giving terminating representations to more commonly used decimals. Each number is essentially a numerator and denominator pair, with some bits to indicate the position of the division bar separating them.
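For a rough feel of the idea, here is a hypothetical numerator/denominator pair in Java. A real floating bar type packs both halves into one fixed-width word with a movable "bar"; this unbounded version just shows the arithmetic:

```java
import java.math.BigInteger;

// Hypothetical sketch: an exact rational as a numerator/denominator pair
// kept in lowest terms.
record Rational(BigInteger num, BigInteger den) {
    static Rational of(BigInteger n, BigInteger d) {
        BigInteger g = n.gcd(d);
        return new Rational(n.divide(g), d.divide(g));
    }
    static Rational of(long n, long d) {
        return of(BigInteger.valueOf(n), BigInteger.valueOf(d));
    }
    Rational add(Rational o) {
        // a/b + c/d = (a*d + c*b) / (b*d), then reduce
        return of(num.multiply(o.den).add(o.num.multiply(den)), den.multiply(o.den));
    }
    public String toString() { return num + "/" + den; }
}

class RationalDemo {
    public static void main(String[] args) {
        // 1/10 + 1/5 is exactly 3/10 - no base-dependent rounding anywhere:
        System.out.println(Rational.of(1, 10).add(Rational.of(1, 5)));  // 3/10
    }
}
```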
No, it is a problem for any base.
For example, the decimal system can represent 1/5, 1/4, 1/8 and 1/2 exactly. But what about 1/3, 1/7, 1/6 and 1/9 as decimal numbers with a finite number of digits?
This will be a problem for any base when numbers have to be boxed into a finite number of digits or a finite amount of memory.
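Even a decimal type runs into this; a small Java sketch with BigDecimal:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class AnyBase {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE, three = new BigDecimal(3);
        // 1/3 has no finite decimal expansion, so an exact divide throws:
        try {
            one.divide(three);
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage());
            // Non-terminating decimal expansion; no exact representable decimal result.
        }
        // You must pick a precision, i.e. accept rounding, just like binary floats:
        System.out.println(one.divide(three, MathContext.DECIMAL64));  // 0.3333333333333333
    }
}
```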
One good thing is that decimal is a widely used format, so it is a reasonable default for representing things. But that is more of an accidental advantage that decimal happens to have. Nothing more.
I did not read the whole parent comment, so my message is redundant. But yes, objectively decimal can represent more numbers exactly (those built from powers of 1/2 and 1/5).
I think the arguments for using decimal as the representation are there [1] (which is where I originally came to know about this problem).
Oh well.