I feel like it should really be emphasised that this happens because of a mismatch between binary exponentiation and decimal exponentiation.
0.1 = 1 × 10^-1, but there is no integer significand s and integer exponent e such that 0.1 = s × 2^e.
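A quick way to see this for yourself (a minimal Python sketch, just for illustration; any language that exposes the raw double would do): Decimal(0.1) prints the exact binary value the literal actually stores, and as_integer_ratio() shows the nearest s / 2^e the format can manage.

    from decimal import Decimal

    # The exact value the float literal 0.1 actually stores -- not 1/10.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # The nearest representable fraction: an integer over a power of two (2^55 here).
    print((0.1).as_integer_ratio())
    # (3602879701896396, 36028797018963968)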
When this issue comes up, people seem to often talk about fixing it by using decimal floats or fixed-point numbers (using some 10^x divisor). If you change the base, you solve the problem of representing 0.1, but whatever base you choose, you're going to have unrepresentable rationals. Base 2 fails to represent 1/10 just as base 10 fails to represent 1/3. All you're doing by using something based around the number 10 is supporting numbers that we expect to be able to write on paper, not solving some fundamental issue of number representation.
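To make that concrete (Python's decimal module standing in for any base-10 format, purely as an illustration): 1/3 gets rounded to the context's precision in base 10 exactly the way 1/10 gets rounded in base 2.

    from decimal import Decimal, getcontext

    getcontext().prec = 28  # the default working precision
    third = Decimal(1) / Decimal(3)
    print(third)                    # 0.3333333333333333333333333333
    print(third * 3 == Decimal(1))  # False -- 1/3 is not exact in base 10 either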
Also, binary-coded decimal is irrelevant here: what you want to change is which base is used, not how any integers are represented in memory.
Agree. None of these floating point quirks are actually problems if you think of floats as finite-precision approximations to real numbers, not tied to any particular base. They're just like physical measurements of continuous quantities: you wouldn't be surprised to find an error in the 15th significant figure of a measurement, and you wouldn't try to compare two measurements for exact equality. So don't do those things with floating point numbers either, and everything will work fine.
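For example (a Python sketch; math.isclose is just one way to express a tolerance): compare with a tolerance instead of ==, the same way you'd compare two physical measurements.

    import math

    a = 0.1 + 0.2
    print(a == 0.3)              # False -- the error lives in the last couple of bits
    print(math.isclose(a, 0.3))  # True -- relative-tolerance comparison
    print(f"{a:.17g}")           # 0.30000000000000004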
Yes, there are some exceptions where you can reliably compare for equality or get exact decimal values or whatever, but those are kind of hacks that you can only take advantage of by breaking the abstraction.
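For instance (Python again, just to show what "breaking the abstraction" looks like): dyadic rationals and integers up to 2^53 happen to be exact in a binary double, so these particular equality checks are reliable, but only because you know the underlying binary format.

    # Sums of powers of two are exactly representable, so this comparison is exact.
    print(0.5 + 0.25 == 0.75)         # True

    # Integers are exact up to 2**53...
    print(float(2**53) == 2**53)      # True
    # ...but the very next integer rounds back down to 2**53.
    print(float(2**53 + 1) == 2**53)  # True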
If you only use decimal values in your application, it actually is a fix, because you can store the numbers you care about exactly. Of course it's not really a fix if you're being pedantic, but for a lot of simple UI stuff it's good enough.
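e.g. with Python's decimal module (one stand-in for a decimal type; construct from strings so the inputs start out exact):

    from decimal import Decimal

    price = Decimal("0.10") + Decimal("0.20")
    print(price)                     # 0.30
    print(price == Decimal("0.30"))  # True, unlike 0.1 + 0.2 == 0.3 in binary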