I imagine whole classes of unexpected problems: timing side-channel attacks, financial types expecting it to work better than it does and missing easily overlooked corner cases...
How do I know this? I spent two years investigating exact versus inexact arithmetic in the context of computational science.
>It uses exact rational arithmetic for values that it can represent that way, preferring correctness to efficiency
Fwiw, floats do exactly this; the only difference is that you've changed your base from 2 to 10 and introduced unnecessary computation for what is maybe a 10% increase in exactly computable values.
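A quick Python sketch of the point being made here: binary floats are exact for the values they can represent (dyadic rationals, p / 2^k); the trouble only starts with decimal literals that aren't dyadic. Switching the base to 10 (e.g. Python's `decimal` module as a stand-in) fixes the literal case and nothing more.

```python
from decimal import Decimal

# Binary floats represent dyadic rationals exactly, so arithmetic on
# them can be exact -- floats "do exactly this" for such values:
assert 0.5 + 0.25 == 0.75                 # exact: all three are dyadic

# ...but common decimal literals aren't dyadic, so base-2 rounds them:
assert 0.1 + 0.2 != 0.3

# A base-10 representation makes those same literals exact:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```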
No, exact rationals aren't equivalent to decimal floating point (which, while not as good as an arbitrary-precision rational or decimal representation, is still better than binary floating point for decimal literals). In an arbitrary-precision decimal or rational representation, or even Raku's bounded-denominator rationals, if p and q are decimal literals that can be exactly represented in the representation, then so are p + q and p - q. For the arbitrary-precision versions, subject to available memory, so are p × q and p ÷ q. Fixed-size floating point, binary or decimal, provides none of those precision guarantees (whether for decimal or binary literals).
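To make the closure claim concrete, here's a sketch using Python's `fractions.Fraction` as an illustrative stand-in for an arbitrary-precision rational type (not Raku's bounded-denominator rationals): decimal literals become exact rationals, and all four operations stay exact, while fixed-size binary floats lose even addition on the same literals.

```python
from fractions import Fraction

# Decimal literals represented as exact rationals:
p, q = Fraction("0.1"), Fraction("3.0")

# Closed under +, -, ×, ÷ (subject only to memory):
assert p + q == Fraction(31, 10)
assert q - p == Fraction(29, 10)
assert p * q == Fraction(3, 10)
assert p / q == Fraction(1, 30)   # exact, even though 1/30 has no finite decimal form

# Fixed-size binary floating point gives no such guarantee,
# even for plain addition of decimal literals:
assert 0.1 + 0.2 != 0.3
```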