
> 1. It does not use a predetermined finite set of rational numbers. It uses a bigint numerator and a 64-bit denominator.

This is rather ugly, since it is not closed under inversion.

> 2. Even the case of bounded numerator and bounded denominator would preserve more of the nice mathematical properties of rational arithmetic than IEEE 754 floating-point arithmetic does (e.g. the associativity of addition).

I do not see how this can possibly be the case. How do you define rational arithmetic with bounded denominators so that addition is associative?

> 3. While the latter could be implemented in hardware, it is not implemented in hardware. The defaults for literals prefer the choice that is implemented in hardware, IEEE 754 floating-point. This is a premature optimization.

Alright, but this view is rather subjective, and only valid if you find rational arithmetic more natural than floating point (which many people do not). Regardless of efficiency, using rational arithmetic with bounded integers in numerical computing would be extremely unnatural to most analysts: they would constantly need to "normalize" the computations so that the numbers do not become too small, plus a lot of other ugly tricks that are not needed in floating point.

Besides some trivial decimal arithmetic (that can be easily implemented in fixed point for the common use case of counting money), I do not really see the point of the rational representation with bounded ints. Of course, when the denominator is allowed to be a bigint, this is very useful in math, but you'll agree this is a completely different context.




> I do not see how this can possibly be the case. How do you define rational arithmetic with bounded denominators so that addition is associative?

There's only one mathematically natural choice in the case of bounded numerator and bounded denominator: you simply define (an/ad) + (bn/bd) = (bd × an + ad × bn) / (bd × ad), where the + on the right-hand side is ordinary two's-complement addition (and the equality test for an/ad and bn/bd is ad × bn == bd × an, which also naturally accounts for the zero-denominator-due-to-zero-divisors cases; alternatively, you can put everything in lowest terms, but there's no point).
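
To make that concrete, here is a minimal sketch in Rust, where wrapping arithmetic is explicit (this is not Raku's actual Rat type; the Rat struct and the add/eq functions are made up for illustration). Since wrapping addition and multiplication form a commutative ring, either grouping of a three-way sum produces the same numerator/denominator pair, which is exactly why associativity survives:

    // Bounded rationals: i64 numerator and denominator, all ops wrapping
    // (i.e. carried out in the ring Z/2^64). Illustrative sketch only.
    #[derive(Clone, Copy, Debug)]
    struct Rat {
        n: i64, // numerator
        d: i64, // denominator
    }

    // (an/ad) + (bn/bd) = (bd*an + ad*bn) / (bd*ad), every operation wrapping.
    fn add(a: Rat, b: Rat) -> Rat {
        Rat {
            n: b.d.wrapping_mul(a.n).wrapping_add(a.d.wrapping_mul(b.n)),
            d: b.d.wrapping_mul(a.d),
        }
    }

    // Equality by cross-multiplication, also wrapping: ad*bn == bd*an.
    fn eq(a: Rat, b: Rat) -> bool {
        a.d.wrapping_mul(b.n) == b.d.wrapping_mul(a.n)
    }

    fn main() {
        let (a, b, c) = (Rat { n: 1, d: 3 }, Rat { n: 1, d: 6 }, Rat { n: 1, d: 7 });
        // Wrapping + and * commute and associate on i64, so both groupings
        // yield literally the same (numerator, denominator) pair.
        assert!(eq(add(add(a, b), c), add(a, add(b, c))));
    }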

> Regardless of efficiency, using rational arithmetic with bounded integers in numerical computing would be extremely unnatural to most analysts

Floating-point was invented by numerical analysts for numerical analysts. No wonder they find it most natural. The overwhelming majority of software developers are not numerical analysts, and most programming languages do not target numerical analysts. Nobody says that numerical analysts should not use floats (except maybe the unum guy); we're arguing about defaults in languages that explicitly do not have numerical analysts among their core target audience.


>> I do not see how this can possibly be the case. How do you define rational arithmetic with bounded denominators so that addition is associative?

> That's easy. In the case of bounded numerator and bounded denominator, you simply define (an/ad) + (bn/bd) = (bd x an + ad x bn) / (bd x ad)

This definition is not complete. What happens when "bd x ad" is larger than the maximum allowed denominator?

> The overwhelming majority software developers are not numerical analysts,

The overwhelming majority of young people who learn to program today do machine learning, which is based on munging huge arrays of floating-point numbers. Tell them to use rationals if you dare!


> This definition is not complete. What happens when "bd x ad" is larger than the maximum allowed denominator?

All integer operations are two's-complement operations, as I'm sure you guessed anyway.
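
In other words, in the sketch above there is no "maximum allowed denominator" at all: an overflowing product simply wraps. A two-line illustration (again a hypothetical sketch, not Raku's implementation):

    // When bd * ad overflows i64, wrapping_mul reduces the product modulo
    // 2^64, so the sum is still defined (if numerically surprising).
    fn main() {
        let (ad, bd): (i64, i64) = (3_000_000_000, 5_000_000_000);
        let d = ad.wrapping_mul(bd); // 1.5e19 > i64::MAX, so this wraps
        println!("wrapped denominator: {d}");
    }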

> The overwhelming majority of young people who learn to program today do machine learning, which is based on munging huge arrays of floating-point numbers. Tell them to use rationals if you dare!

That's very far from factually true, but this kind of argument is not relevant to the Raku defaults in any case. A machine learning library can use whatever optimized number representation its creators wish to use. In fact, the default choice of most language implementors (double-precision floating point) is typically not the representation used in training or inference on deep learning models anyway. The 1080Ti is fast enough only with single-precision floats.


> This is rather ugly for it is not closed by inversion.

Sure, which is why Scheme is better than Raku here.

> this view is rather subjective, and only valid if you find rational arithmetic more natural than floating point

No, it is an objective fact that arbitrary decimal literals can be represented precisely in a rational representation, and precise operations performed on those representations, but not in binary floating point; binary floating point trades away that precise representation of expressed values, and the ability to operate on them precisely, for space and performance optimizations.
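
For a concrete illustration of that fact (a minimal sketch using plain integers to stand in for a rational type, not any real library's API):

    // 0.1 is exactly the rational 1/10, and (1/10) + (2/10) equals 3/10
    // exactly under cross-multiplication; the same literals have no exact
    // binary floating-point representation, so 0.1 + 0.2 != 0.3 in f64.
    fn main() {
        // (1/10) + (2/10) = (1*10 + 2*10) / (10*10) = 30/100,
        // and 30/100 == 3/10 because 30 * 10 == 100 * 3.
        let (n, d) = (1 * 10 + 2 * 10, 10 * 10);
        assert_eq!(n * 10, d * 3);

        // Binary floating point: neither 0.1 nor 0.2 is representable exactly.
        assert_ne!(0.1_f64 + 0.2_f64, 0.3_f64);
        println!("0.1 + 0.2 as f64 = {:.20}", 0.1_f64 + 0.2_f64);
    }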



