This is exactly the same situation as with floating point numbers. In both cases you have a predetermined finite set of rational numbers, with exact rational arithmetic when you stay within those numbers, and deterministic rules when your operation exits this finite set.
> Using imprecise but efficient-to-calculate-with numbers for exact literals by default is probably the most pervasive premature optimization in all of computing.
I do not see how this is a matter of precision nor efficiency. Floating point arithmetic was deemed more useful for general purpose for good reasons: the representable numbers are mostly scale-free, so that you do not care about the absolute size of your numbers (you can compute in angstroms or in parsecs and obtain essentially the same results). With rational arithmetic using bounded integers, you cannot represent very large or very small numbers. On the other hand you can represent small fractions like 1/3, which is arguably useful in some cases, but not really a big deal in practice. There's no reason why rational arithmetic with bounded numerator and denominator could not be implemented in hardware as fast as floating point; I do not understand your point.
Neither Scheme nor Raku does this (Scheme uses an unbounded numerator and denominator in its default rational type; Raku has an unbounded numerator).
> I do not see how this is a matter of precision
It's an issue of precision because exact rationals precisely represent the numbers expressed in decimal literals and provide exact (as opposed to approximate with floating point) arithmetic operations, though, to be fair, Raku’s choice of Rat instead of FatRat as the default has some warts in arithmetic.
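For concreteness, here is the precision claim sketched with Python's `fractions` module rather than Raku's `Rat` (the mechanics are the same: decimal literals parsed as rationals stay exact, binary floats do not):

```python
# Sketch using Python's fractions module (not Raku itself) to show what
# "precise representation of decimal literals" buys you.
from fractions import Fraction

# The binary double nearest to 0.1 is not 1/10, so float arithmetic drifts:
print(0.1 + 0.2 == 0.3)                                      # False

# Parsed as exact rationals, the same literals behave as written:
print(Fraction("0.1") + Fraction("0.2") == Fraction(3, 10))  # True
```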
1. It does not use a predetermined finite set of rational numbers. It uses a bigint numerator and a 64-bit denominator.
2. Even the case of bounded numerator and bounded denominator would preserve more of the nice mathematical properties of rational arithmetic than IEEE 754 floating-point arithmetic does (e.g. the associativity of addition).
3. While the latter could be implemented in hardware, it is not implemented in hardware. The defaults for literals prefer the choice that is implemented in hardware, IEEE 754 floating-point. This is a premature optimization.
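Point 2 is easy to demonstrate with nothing Raku-specific; Python floats are IEEE 754 doubles:

```python
# IEEE 754 addition is not associative: the two groupings round differently.
from fractions import Fraction

a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))        # False: 0.6000000000000001 vs 0.6

# The same values as exact rationals associate, just as they do in Q:
fa, fb, fc = Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)
print((fa + fb) + fc == fa + (fb + fc))  # True
```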
This is rather ugly, as the set is not closed under inversion.
> 2. Even the case of bounded numerator and bounded denominator would preserve more of the nice mathematical properties of rational arithmetic than IEEE 754 floating-point arithmetic does (e.g. the associativity of addition).
I do not see how this can possibly be the case. How do you define rational arithmetic with bounded denominators so that addition is associative?
> 3. While the latter could be implemented in hardware, it is not implemented in hardware. The defaults for literals prefer the choice that is implemented in hardware, IEEE 754 floating-point. This is a premature optimization.
Alright, but this view is rather subjective, and only valid if you find rational arithmetic more natural than floating point (which many people do not). Regardless of efficiency, using rational arithmetic with bounded integers in numerical computing would be extremely unnatural to most analysts, they would always need to "normalize" the computations so that all the numbers do not become too small, and a lot of ugly tricks that are not needed in floating point.
Besides some trivial decimal arithmetic (that can be easily implemented in fixed point for the common use case of counting money), I do not really see the point of the rational representation with bounded ints. Of course, when the denominator is allowed to be a bigint, this is very useful in math, but you'll agree this is a completely different context.
That's easy; there's only one mathematically natural choice in the case of bounded numerator and bounded denominator. You simply define (an/ad) + (bn/bd) = (bd × an + ad × bn) / (bd × ad), where the + on the right-hand side is ordinary two's-complement addition (and the equality test for an/ad and bn/bd is ad × bn == bd × an, which also naturally accounts for the zero-denominator-due-to-zero-divisors cases; alternatively, you can put everything in lowest terms, but there's no point).
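This definition can be sketched as a toy model (8-bit two's complement so overflow is easy to see; the names `wrap`, `rat_add`, `rat_eq` are mine, not from any language's implementation):

```python
# Toy model of bounded rationals: every integer operation wraps mod 2**BITS,
# exactly as two's-complement hardware would.
BITS = 8
MOD = 1 << BITS

def wrap(x):
    """Reduce x into the signed two's-complement range [-(MOD//2), MOD//2 - 1]."""
    x %= MOD
    return x - MOD if x >= MOD // 2 else x

def rat_add(an, ad, bn, bd):
    """(an/ad) + (bn/bd) = (bd*an + ad*bn) / (bd*ad), all ops wrapping."""
    return wrap(wrap(bd * an) + wrap(ad * bn)), wrap(bd * ad)

def rat_eq(an, ad, bn, bd):
    """an/ad == bn/bd iff ad*bn == bd*an, again in wraparound arithmetic."""
    return wrap(ad * bn) == wrap(bd * an)

print(rat_add(1, 3, 1, 6))   # (9, 18), i.e. 9/18
print(rat_eq(9, 18, 1, 2))   # True: 9/18 equals 1/2 without reducing
```

Note that results are not put in lowest terms; the cross-multiplication equality test makes that unnecessary, as the comment above says.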
> Regardless of efficiency, using rational arithmetic with bounded integers in numerical computing would be extremely unnatural to most analysts
Floating-point was invented by numerical analysts for numerical analysts. No wonder they find it most natural. The overwhelming majority of software developers are not numerical analysts, and most programming languages do not target numerical analysts. Nobody says that numerical analysts should not use floats (except maybe the unum guy); we're arguing about defaults in languages that explicitly do not have numerical analysts among their core target audience.
> That's easy. In the case of bounded numerator and bounded denominator, you simply define (an/ad) + (bn/bd) = (bd x an + ad x bn) / (bd x ad)
This definition is not complete. What happens when "bd x ad" is larger than the maximum allowed denominator?
> The overwhelming majority of software developers are not numerical analysts,
The overwhelming majority of young people who learn to program today do machine learning, which is based on munging huge arrays of floating point numbers. Tell them to use rationals if you dare!
All integer operations are two's-complement operations, as I'm sure you guessed anyway.
> The overwhelming majority of young people who learn to program today do machine learning, which is based on munging huge arrays of floating point numbers. Tell them to use rationals if you dare!
That's very far from factually true, but this kind of argument is not relevant to the Raku defaults in any case. A machine learning library can use whatever optimized number representation its creators wish to use. In fact, the default choice of most language implementors (double-precision floating point) is typically not the representation used in training or inference on deep learning models anyway. The 1080Ti is fast enough only with single-precision floats.
Sure, which is why Scheme is better than Raku here.
> this view is rather subjective, and only valid if you find rational arithmetic more natural than floating point
No, it is an objective fact that arbitrary decimal literals can be represented precisely, and exact operations performed on those representations, in a rational representation but not in binary floating point, and that binary floating point trades away that exactness for space and performance optimizations.
It does not preserve associativity of addition. If a = MAX_INT/1, b = 1/1, c = -1/1, then a + b is Infinity, and (a + b) + c is therefore also Infinity, while b + c is 0, so a + (b + c) is MAX_INT/1.
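For comparison, IEEE 754 doubles really do behave this way once an intermediate sum overflows to infinity (a Python sketch, with `1e308` playing the role of MAX):

```python
# IEEE 754 analogue of the example: once an intermediate sum overflows
# to infinity, the grouping of the additions changes the result.
a, b, c = 1e308, 1e308, -1e308

print((a + b) + c)   # inf: a + b overflows, and inf + c stays inf
print(a + (b + c))   # 1e+308: b + c is 0.0, so the overflow never happens
```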
It does, if you use the mathematically natural addition operation (see my reply to enriquto). In the example above, the result is MAX_INT/1 in both cases.
Also, it's not clear to me that your proposed wraparound addition operator is actually fully associative in all possible cases of overflow.
Proof sketch: write out (an/ad + (bn/bd + cn/cd)) and ((an/ad + bn/bd) + cn/cd) symbolically, and apply the fact that two's-complement arithmetic is associative, commutative and distributive in all possible cases of overflow.
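The proof sketch can also be checked exhaustively for a tiny word size (3 bits here so the triple loop stays fast; working with unsigned residues mod 2**BITS, which form the same ring as two's complement):

```python
# Exhaustive check that wraparound rational addition is associative.
# In fact both groupings produce identical numerator/denominator pairs,
# because Z/2^k is a commutative ring.
BITS = 3
MOD = 1 << BITS
w = lambda x: x % MOD  # unsigned residues; same ring as two's complement

def add(a, b):
    (an, ad), (bn, bd) = a, b
    return (w(w(bd * an) + w(ad * bn)), w(bd * ad))

vals = [(n, d) for n in range(MOD) for d in range(MOD)]
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in vals for b in vals for c in vals)
print("associative for all", len(vals) ** 3, "triples")
```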
> An addition operator where adding two positive numbers results in a negative number can hardly be called "the mathematically natural" variant.
It's exactly how integer arithmetic works on every CPU designed after 1980, to general satisfaction. In any case, it's mathematically natural because it's exactly how you would construct the field of fractions of an integral domain.
I doubt that raku/perl6 uses these rules for rational arithmetic. It would be highly unintuitive to sum two rational numbers close to 0.5 and obtain a negative number. Do you have any reference for raku arithmetic rules? I cannot seem to find them.
Most other systems (including Scheme) that I've seen that use rational representation by default use arbitrary precision for both numerator and denominator, so don't have this concern.