I’d phrase it as, “It’s not _just_ about floating point artefacts.” Rounding is a source of error inherent to storing arbitrary-precision numbers in finite space; floating point numbers are just a very common way to do that. Numerical stability is basically about information loss, typically when adding a very small number to a very large one causes the small number’s contribution to be rounded away.
It happens with ints too, but there it is so much more obvious than with floats that no one is surprised.
Repeating `i += 0.1` a million times will produce the same value as repeating `i += 0.2` a million times if `i` is an int (in a language like C, where the result is truncated back to an int), since the rounding makes both effectively `i += 0`.
It’s something more fundamental. Similar issues happen elsewhere in numerical computation. See https://en.m.wikipedia.org/wiki/Numerical_stability