
FWIW and for all its other shortcomings, Raku (née Perl 6) handles common arithmetic in a manner that wouldn't surprise a mathematician. Or a non-mathematician, for that matter. Witness:

    $ python3 -c 'print(0.3 == 0.1 + 0.2)'
    False
    
    $ ruby -e 'puts 0.3 == 0.1 + 0.2'
    false
    
    $ perl6 -e 'say 0.3 == 0.1 + 0.2'
    True



As a mathematician, I am actually very surprised by Perl 6's behavior here. What the hell is going on? Does it use fixed-point arithmetic or what?


It uses exact rational arithmetic for values that it can represent that way, preferring correctness to efficiency. Scheme and some other languages do the same thing.

Using imprecise but efficient-to-calculate-with numbers for exact literals by default is probably the most pervasive premature optimization in all of computing.
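
(The same exact behaviour is available opt-in in Python via the standard fractions module, which parses decimal-literal strings into exact rationals:)

    from fractions import Fraction

    # Construct exact rationals from the decimal literals themselves
    print(Fraction('0.3') == Fraction('0.1') + Fraction('0.2'))  # True
    print(Fraction('0.1') + Fraction('0.2'))                     # 3/10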


> It uses exact rational arithmetic for values that it can represent that way, preferring correctness to efficiency.

This is exactly the same situation as with floating point numbers. In both cases you have a predetermined finite set of rational numbers, with exact rational arithmetic when you stay within those numbers, and deterministic rules when your operation exits this finite set.
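
Both halves of that are easy to see with Python floats:

    # Dyadic rationals in range: float arithmetic is exact
    print(0.5 + 0.25 == 0.75)   # True, all three values are exactly representable

    # 0.1, 0.2, 0.3 are not in the set; each result is rounded deterministically
    print(0.1 + 0.2)            # 0.30000000000000004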

> Using imprecise but efficient-to-calculate-with numbers for exact literals by default is probably the most pervasive premature optimization in all of computing.

I do not see how this is a matter of precision or efficiency. Floating point arithmetic was deemed more useful for general-purpose use for good reasons: the representable numbers are mostly scale-free, so that you do not care about the absolute size of your numbers (you can compute in angstroms or in parsecs and obtain essentially the same results). With rational arithmetic using bounded integers, you cannot represent very large or very small numbers. On the other hand, you can represent small fractions like 1/3, which is arguably useful in some cases, but not really a big deal in practice. There's no reason why rational arithmetic with bounded numerator and denominator could not be efficiently implemented in hardware as fast as floating point; I do not understand your point.
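
The scale-free claim is easy to check: the relative spacing of adjacent doubles is roughly constant across many orders of magnitude (math.ulp requires Python 3.9+):

    import math

    # Relative gap between adjacent doubles, at very different scales
    for x in (1e-10, 1.0, 1e10):
        print(x, math.ulp(x) / x)   # roughly 1e-16 in every case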


> In both cases you have a predetermined finite set of rational numbers

Neither Scheme nor Raku does this (Scheme uses an unbounded numerator and denominator in its default rational type; Raku has an unbounded numerator).

> I do not see how this is a matter of precision

It's an issue of precision because exact rationals precisely represent the numbers expressed in decimal literals and provide exact (as opposed to approximate with floating point) arithmetic operations, though, to be fair, Raku’s choice of Rat instead of FatRat as the default has some warts in arithmetic.


It is not at all the same situation as with floating point numbers.

1. It does not use a predetermined finite set of rational numbers. It uses a bigint numerator and a 64-bit denominator.

2. Even the case of bounded numerator and bounded denominator would preserve more of the nice mathematical properties of rational arithmetic than IEEE 754 floating-point arithmetic does (e.g. the associativity of addition); a concrete comparison follows after this list.

3. While the latter could be implemented in hardware, it is not implemented in hardware. The defaults for literals prefer the choice that is implemented in hardware, IEEE 754 floating-point. This is a premature optimization.
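
To see the floating-point half of point 2 concretely (Python's arbitrary-precision Fraction standing in for the rational side here; the bounded case is argued downthread):

    from fractions import Fraction

    # IEEE 754 addition is not associative
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False

    # Exact rational addition is
    a, b, c = Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)
    print((a + b) + c == a + (b + c))                # True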


> 1. It does not use a predetermined finite set of rational numbers. It uses a bigint numerator and a 64-bit denominator.

This is rather ugly, since it is not closed under inversion.

> 2. Even the case of bounded numerator and bounded denominator would preserve more of the nice mathematical properties of rational arithmetic than IEEE 754 floating-point arithmetic does (e.g. the associativity of addition).

I do not see how this can possibly be the case. How do you define rational arithmetic with bounded denominators so that addition is associative?

> 3. While the latter could be implemented in hardware, it is not implemented in hardware. The defaults for literals prefer the choice that is implemented in hardware, IEEE 754 floating-point. This is a premature optimization.

Alright, but this view is rather subjective, and only valid if you find rational arithmetic more natural than floating point (which many people do not). Regardless of efficiency, using rational arithmetic with bounded integers in numerical computing would be extremely unnatural to most analysts: they would always need to "normalize" the computations so that the numbers do not become too small, and resort to a lot of ugly tricks that are not needed in floating point.

Besides some trivial decimal arithmetic (that can be easily implemented in fixed point for the common use case of counting money), I do not really see the point of the rational representation with bounded ints. Of course, when the denominator is allowed to be a bigint, this is very useful in math, but you'll agree this is a completely different context.


> I do not see how this can possibly be the case. How do you define rational arithmetic with bounded denominators so that addition is associative?

There's only one mathematically natural choice. In the case of bounded numerator and bounded denominator, you simply define (an/ad) + (bn/bd) = (bd × an + ad × bn) / (bd × ad), where the + on the right-hand side is ordinary two's-complement addition (and the equality test for an/ad and bn/bd is ad × bn == bd × an, which also naturally accounts for the zero-denominator-due-to-zero-divisors cases; alternatively, you can put everything in lowest terms, but there's no point).
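
A toy model of this in Python (purely illustrative, not any shipping language's semantics), with 16-bit two's-complement components so that overflow is easy to exercise:

    import random

    BITS = 16
    MOD = 1 << BITS

    def wrap(x):
        # reduce to a signed 16-bit two's-complement value
        x %= MOD
        return x - MOD if x >= MOD // 2 else x

    def add(a, b):
        # (an/ad) + (bn/bd) = (bd*an + ad*bn) / (bd*ad), all ops wrapping
        (an, ad), (bn, bd) = a, b
        return (wrap(wrap(bd * an) + wrap(ad * bn)), wrap(bd * ad))

    def eq(a, b):
        # an/ad == bn/bd  iff  ad*bn == bd*an  (mod 2**BITS)
        (an, ad), (bn, bd) = a, b
        return wrap(ad * bn) == wrap(bd * an)

    # spot-check associativity, overflowing cases included
    random.seed(0)
    for _ in range(10000):
        a, b, c = [(random.randrange(-MOD // 2, MOD // 2),
                    random.randrange(-MOD // 2, MOD // 2)) for _ in range(3)]
        assert eq(add(add(a, b), c), add(a, add(b, c)))

Every trial passes, because both groupings expand to literally the same numerator and denominator mod 2^16.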

> Regardless of efficiency, using rational arithmetic with bounded integers in numerical computing would be extremely unnatural to most analysts

Floating-point was invented by numerical analysts for numerical analysts. No wonder they find it most natural. The overwhelming majority of software developers are not numerical analysts, and most programming languages do not target numerical analysts. Nobody says that numerical analysts should not use floats (except maybe the unum guy); we're arguing about defaults in languages that explicitly do not have numerical analysts among their core target audience.


>> I do not see how this can possibly be the case. How do you define rational arithmetic with bounded denominators so that addition is associative?

> That's easy. In the case of bounded numerator and bounded denominator, you simply define (an/ad) + (bn/bd) = (bd x an + ad x bn) / (bd x ad)

This definition is not complete. What happens when "bd x ad" is larger than the maximum allowed denominator?

> The overwhelming majority of software developers are not numerical analysts,

The overwhelming majority of young people who learn to program today do machine learning, which is based on munging huge arrays of floating point numbers. Tell them to use rationals if you dare!


> This definition is not complete. What happens when "bd x ad" is larger than the maximum allowed denominator?

All integer operations are two's-complement operations, as I'm sure you guessed anyway.

> The overwhelming majority of young people who learn to program today do machine learning, which is based on munging huge arrays of floating point numbers. Tell them to use rationals if you dare!

That's very far from factually true, but this kind of argument is not relevant to the Raku defaults in any case. A machine learning library can use whatever optimized number representation its creators wish. In fact, the default choice of most language implementors (double-precision floating point) is typically not the representation used in training or inference on deep learning models anyway; even a 1080 Ti is only fast with single-precision floats.


> This is rather ugly, since it is not closed under inversion.

Sure, which is why Scheme is better than Raku here.

> this view is rather subjective, and only valid if you find rational arithmetic more natural than floating point

No, it is an objective fact that arbitrary decimal literals can be represented precisely, and precise operations performed on those representations, in a rational representation but not in binary floating point, and that binary floating point trades away that precise representation of expressed values, and the ability to operate on them precisely, for space and performance optimizations.


> 2. Even the case of bounded numerator and bounded denominator would preserve more of the nice mathematical properties of rational arithmetic than IEEE 754 floating-point arithmetic does (e.g. the associativity of addition).

It does not preserve associativity of addition. If a = MAX_INT/1, b = 1/1, c = -1/1, then a + b is Infinity, and (a + b) + c is therefore also Infinity, while b + c is 0, so a + (b + c) is MAX_INT/1.
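
Transcribed into a small Python sketch (with M standing in for MAX_INT and the overflow-to-Infinity semantics assumed above):

    M = 2**63 - 1          # MAX_INT for a 64-bit signed integer
    INF = float('inf')

    def sat_add(x, y):
        # addition that overflows to Infinity, as in the example above
        s = x + y
        return INF if s > M else s

    a, b, c = M, 1, -1
    print(sat_add(sat_add(a, b), c))  # inf: a + b overflows, and inf + c stays inf
    print(sat_add(a, sat_add(b, c)))  # 9223372036854775807: b + c is 0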


> It does not preserve associativity of addition.

It does, if you use the mathematically natural addition operation (see my reply to enriquto). In the example above, the result is MAX_INT/1 in both cases.


An addition operator where adding two positive numbers results in a negative number can hardly be called "the mathematically natural" variant. Saturation arithmetic (which is the norm for floating point, and even most fixed point) is pretty obviously not associative.

Also, it's not clear to me that your proposed wraparound addition operator is actually fully associative in all possible cases of overflow.


> Also, it's not clear to me that your proposed wraparound addition operator is actually fully associative in all possible cases of overflow.

Proof sketch: write out (an/ad) + ((bn/bd) + (cn/cd)) and ((an/ad) + (bn/bd)) + (cn/cd) symbolically, and apply the fact that two's-complement arithmetic is associative, commutative and distributive in all possible cases of overflow.
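
Spelling that out, with every sum and product taken mod 2^k (so the intermediate wrapping can be ignored):

    ((an/ad) + (bn/bd)) + (cn/cd)
      = (cd*(bd*an + ad*bn) + (ad*bd)*cn) / ((ad*bd)*cd)
      = (bd*cd*an + ad*cd*bn + ad*bd*cn) / (ad*bd*cd)
      = (an/ad) + ((bn/bd) + (cn/cd))

since the fully expanded middle line is symmetric under regrouping.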

> An addition operator where adding two positive numbers results in a negative number can hardly be called "the mathematically natural" variant.

It's exactly how integer arithmetic works on every CPU designed after 1980, to general satisfaction. In any case, it's mathematically natural because it's exactly how you would construct the field of fractions of an integral domain.


> write out (an/ad) + ((bn/bd) + (cn/cd)) and ((an/ad) + (bn/bd)) + (cn/cd) symbolically, and apply the fact that two's-complement arithmetic is associative, commutative and distributive in all possible cases of overflow.

I doubt that raku/perl6 uses these rules for rational arithmetic. It would be highly unintuitive to sum two rational numbers close to 0.5 and obtain a negative number. Do you have any reference for raku arithmetic rules? I cannot seem to find them.


Raku's Rat falls through to floats if the denominator overflows, and has arbitrary-precision numerators; it does not use two's-complement math for normal operations in any way exposed to the user, AFAIK.

Most other systems (including Scheme) that I've seen that use rational representation by default use arbitrary precision for both numerator and denominator, so don't have this concern.
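
Python's fractions.Fraction takes the Scheme approach, with bigints on both sides of the bar, so there is no overflow point at which it falls back to floats:

    from fractions import Fraction

    tiny = Fraction(1, 10**40)                   # denominator far beyond 64 bits
    print(tiny + tiny == Fraction(2, 10**40))    # True: still exact, no float fallback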


Your proof fails for the odd-man-out negative integer value (the most negative integer, which has no positive counterpart).


It does not. Feel free to justify your claim by writing down an explicit counterexample, though.


Even if you allow bigint numerators, addition is still not associative. You can similarly build an example with 1/(M-1) - 1/M ...


"bounded integers" not entirely sure what you mean. But the integers are big integers that can take up the entirety of your computer memory to represent a single rational number if you so wish.

That's a really bad choice, and it will result in surprisingly unpredictable behaviour, because deciding whether a fraction terminates in base 10 (i.e. whether its reduced denominator has only 2 and 5 as prime factors) is much, much more complicated than in a pure prime-power base.

I imagine whole classes of unexpected problems: timing side-channel attacks, financial types expecting it to work better than it does and missing easily overlooked corner cases...

How do I know this? I spent two years investigating exact versus inexact arithmetic in the context of computational science.

> It uses exact rational arithmetic for values that it can represent that way, preferring correctness to efficiency

Fwiw, floats do exactly this; the only difference is that you've changed your base from 2 to 10, and introduced unnecessary computation for what is maybe a 10% increase in exactly computable values.
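
For what it's worth, the "base 2" point is easy to exhibit in Python, which can show the exact rational behind any double:

    from fractions import Fraction

    print(Fraction(0.1))   # 3602879701896397/36028797018963968, the double nearest 1/10
    print(Fraction(0.1) + Fraction(0.2) == Fraction(0.3))
    # False: the exact sum of the two doubles is not the double nearest 0.3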


> Fwiw, floats do exactly this; the only difference is that you've changed your base from 2 to 10

No, exact rationals aren't equivalent to decimal floating point, which, while not as good as an arbitrary-precision rational or decimal representation, is still better than binary floating point for decimal literals. In an arbitrary-precision decimal or rational representation, or even Raku's bounded-denominator rationals, if p and q are decimal literals that can be exactly represented in the representation, then so are p + q and p - q. For the arbitrary-precision versions, subject to available memory, so are p × q and p ÷ q. Fixed-size floating point (binary or decimal) provides none of those precision guarantees (whether for decimal or binary literals).
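
A rough Python sketch of the three-way difference, with Decimal at a pinned context precision standing in for fixed-size decimal floating point and Fraction for exact rationals:

    from decimal import Decimal, getcontext
    from fractions import Fraction

    print(0.1 + 0.2 == 0.3)                                   # False: binary floats round the literals
    print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True: decimal sums of short literals stay exact

    getcontext().prec = 28                                    # pin the working precision
    print(Decimal(1) / Decimal(3) * 3 == 1)                   # False: fixed-precision division rounds
    print(Fraction(1, 3) * 3 == 1)                            # True: exact rational arithmetic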


They have this totally irrational idea of having "rational" numbers.

https://docs.perl6.org/language/numerics#Rational

By irrational I am not joking: these sorts of number systems are complicated and glitchy, in this case with program-destroying magic like autoconversion to floating point when you hit a 2^64 denominator, and, don't click this URL, please, just read it:

https://docs.perl6.org/language/numerics#Zero-denominator_ra...


> totally irrational idea

Could you expand on why it is a totally irrational idea? Seems pretty rational to me. But then I'm not really a mathematician.

> autoconversion to floating point when you hit a 2^64 denominator

Please note that this is after normalization. And it comes from practicality: having the denominator also be a BigInt slows down the use of Rational numbers significantly. However, if you do need that type of precision, you can have it by using infectious FatRats: any expression with a FatRat will result in a FatRat (unless explicitly coerced to something else, of course).

With regards to the zero-denominator Rats: they are just special cases, just as IEEE has special cases for -Inf, Inf and NaN. So what's the problem there?


It adds gratuitous corner cases to the language.


FWIW, Perl 6 prefers to be called Raku now.


For your amusement, Go:

    fmt.Print(0.3 == 0.1 + 0.2)
    //   => true
https://repl.it/repls/SubtleQuarrelsomeMaintenance


Interesting. I thought Go used standard IEEE-754 floats.

Why does it have this result?


They evaluate constant expressions exactly at compile time, then convert to float.


That works great until people complain that it works differently at compile time, and they have a point.


Yeah. On the other hand, they won’t get any surprises when cross-compiling. Edit: Okay, maybe Go devs saved themselves the work of having floating point emulation. Edit: No, they are folding both constant expressions exactly and actual floating point operations, going by assembly output.


> No, they are folding both constant expressions exactly and actual floating point operations, going by assembly output.

Yes. The behaviour is quite odd and unexpected. It seems they have different rules for when to apply the exact constant version of floating-point math and for what constitutes actual compile-time evaluation.

  const a float64 = 0.1
  const b float64 = 0.2
  const x float64 = 0.1 + 0.2 // const evaluation, exact "rational" math
  const y float64 = a + b     // const evaluation, IEEE-754 math

  fmt.Println(x == y)         // false ??!?
This seems quite a horrible approach.

EDIT: Or how about this one:

  const x float64 = 0
  fmt.Println(x + 0.1 + 0.2 == 0.1 + 0.2 + x)   // false!
  fmt.Println(x + 0.1 + 0.2 == x + (0.1 + 0.2)) // false!


The literal folding is specified language behavior, the other is just an optimization.


GCC at least evaluates constant FP expressions using a model of the target machine FP unit, so that compile time and runtime expressions behave identically.


Go uses IEEE-754. The behaviour of these examples may be odd to those new to the quirks of floats, but I thought that behaviour was nowadays standardized and reproducible across platforms.


Raku: Here's a trivial example that shows a sophisticated approach to getting you the precise numbers you want for all your numeric needs.

Go: Here's a trivial example.


It's arguable that this is a good thing.



