
But rationals are more expensive to compute with than floating-point; this is another example of the trade-off between performance and accuracy.
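To make the cost concrete, here's a minimal C sketch (the rat type and helper names are mine, purely for illustration): one floating-point add is a single hardware instruction, while a rational add takes three multiplications, an addition, a gcd loop, and two divisions.

    #include <stdint.h>

    /* Hypothetical fixed-width rational type, for illustration only. */
    typedef struct { int64_t num, den; } rat;

    static int64_t gcd64(int64_t a, int64_t b) {
        while (b != 0) { int64_t t = a % b; a = b; b = t; }
        return a < 0 ? -a : a;
    }

    /* a/b + c/d = (a*d + c*b) / (b*d), then reduce to lowest terms.
       (Overflow is ignored here; that problem comes up further down.) */
    static rat rat_add(rat x, rat y) {
        rat r = { x.num * y.den + y.num * x.den, x.den * y.den };
        int64_t g = gcd64(r.num, r.den);
        r.num /= g;
        r.den /= g;
        return r;
    }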





it's also a range-storage trade-off. if you use two fixed-width integers to represent a rational, the minimum and maximum values are the same as those of the integer type. floating point gives a far wider range for the same number of bits.

I'm sure there's some subtlety I'm missing, but isn't it actually the same trade-off? A 64-bit float can only represent integers up to 2^53 exactly. Anything above that, and you don't even have integer-level precision on the number anymore... This sliding scale of precision is exactly why floats are terrible at the kinds of operations that would cause you to use a rational instead.
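The cutoff is easy to check (standard C, nothing beyond stdint.h):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 2^53 is the last point up to which doubles can represent
           every integer exactly; 2^53 + 1 rounds back down. */
        double a = (double)(1ULL << 53);        /* 9007199254740992 */
        double b = (double)((1ULL << 53) + 1);  /* rounds to the same value */
        printf("%d\n", a == b);                 /* prints 1 */
        return 0;
    }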

> I'm sure there's some subtlety I'm missing, but isn't it actually the same trade-off?

not exactly, unless you consider space efficiency to be an aspect of performance (which is certainly reasonable). a naive implementation of rationals using two int32_t's only covers the range of a single int32_t, despite using as many bits as a double. it's also a trade-off between range and consistent precision, of course.

this certainly isn't some deep insight into number representation, just a quick point for the benefit of people who haven't thought much about rational data types before.


Once you care about that level of performance, you can surely optimise your representation to have a greater range (use more bits for the numerator) or greater precision (more bits for the denominator) or some boutique solution like using three integers to store the number a + b/c.
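For concreteness, one possible shape for that three-integer layout (the field widths and the invariant are my own choices, not anything proposed above):

    #include <stdint.h>

    /* value = whole + num/den, with the fraction kept in [0, 1).
       The wide whole part extends the range; the fraction keeps
       uniform precision near every integer. */
    typedef struct {
        int64_t  whole;
        uint32_t num;   /* invariant: num < den */
        uint32_t den;
    } mixed;

    static double mixed_to_double(mixed m) {
        return (double)m.whole + (double)m.num / (double)m.den;
    }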

You can store slightly fewer numbers with rationals, because it's hard to avoid having redundant representations like 2/4 and 3/6 (both just 1/2). But the loss of range or precision due to that is pretty small: the fraction of coprime numerator/denominator pairs is 6/π² ≈ 61%, so the redundancy costs well under one bit.


it's not just that they are expensive, it's that the compute time is nondeterministic.

Let's say we need to do a comparison. Set

    a = 34241432415/344425151233 
and

    b = 45034983295/453218433828
Which is greater?

Or, even more fiendish, set

    a = 14488683657616/14488641242046
and

    b = 10733594563328/10733563140768
Which is greater?

By what algorithm would you do the computation, and could you guarantee me the same compute time as comparing 2/3 and 4/5?


I’m not sure I follow. Isn’t it just two integer multiplications followed by a comparison?

a/b > x/y is the same as ay > xb (for positive denominators)

Assuming you don’t overflow your integer type.
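A sketch of that comparison (assuming gcc/clang's __int128 for the widened products, since the examples above don't fit in 32 bits):

    #include <stdint.h>

    /* Compare a/b with x/y, assuming b > 0 and y > 0.  Widening the
       cross products to 128 bits makes them exact, so this takes the
       same time for 2/3 vs 4/5 as for the fiendish examples above. */
    static int rat_cmp(int64_t a, int64_t b, int64_t x, int64_t y) {
        __int128 lhs = (__int128)a * y;
        __int128 rhs = (__int128)x * b;
        return (lhs > rhs) - (lhs < rhs);  /* -1, 0, or 1 */
    }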


> Assuming you don’t overflow your integer type.

There's your answer :)

It's far too easy to overflow your integer type by simply adding a bunch of rationals whose denominators happen to be coprime, or just by multiplying rationals. For this reason, the vast majority of rational implementations use arbitrary precision integers, and of course arithmetic on those isn't constant time.
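You can watch the blow-up happen (a sketch using gcc/clang's __builtin_mul_overflow; summing 1/2 + 1/3 + 1/5 + ... makes the common denominator the product of the primes):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int64_t primes[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29,
                            31, 37, 41, 43, 47, 53};
        int64_t den = 1;
        for (int i = 0; i < 16; i++) {
            int64_t next;
            if (__builtin_mul_overflow(den, primes[i], &next)) {
                /* 2*3*5*...*47 is about 6.1e17; multiplying by 53
                   overflows int64_t, after only 16 terms. */
                printf("overflowed int64 at term %d\n", i + 1);
                return 0;
            }
            den = next;
        }
        return 0;
    }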


One approach would be to hold on to rationals for as long as possible, to eliminate drift, and then dump them out to the nearest floating-point value at the very last moment.
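The final dump is a one-liner, though "nearest" is subtler than it looks (sketch with fixed-width components and my own naming):

    #include <stdint.h>

    /* Both casts round, and then the division rounds again, so this is
       not guaranteed to land on the nearest double.  A correctly
       rounded conversion takes more care (or a bignum library). */
    static double rat_to_double(int64_t num, int64_t den) {
        return (double)num / (double)den;
    }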

IEEE floats are pretty complicated, but today’s CPUs have dedicated support for those and not for rationals, so we use them where we probably shouldn’t.

IEEE floats are absolutely great for many applications where rationals would be overkill or even inappropriate. A videogame doesn't care if the result is 0.3 or 0.30000000000000004. Even some scientific applications can use floats if the coder knows what they're doing.

The problem is devs who don't understand what they're doing and just assume they can use floats in every situation and it'll work out fine. This is not helped by the many popular scripting languages that just default to floats when a result doesn't fit an integer (something more reasonable languages like Common Lisp don't do, for instance).

For instance, to speak of videogames again: very tight precision isn't usually an issue, but loss of granularity when numbers get very big can cause problems, especially if you have very large levels. That being said, rationals wouldn't really help you here; you'd have the same problem, except now you have to keep two numbers within bounds instead of one. Imagine having a very small offset in a complex operation and ending up with a number like 100000000000000000000000000/100000000000000000000000001!
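That granularity loss is easy to see directly: the gap between adjacent 32-bit floats grows with magnitude (plain C, printing the spacing at a few positions):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Near 1.0 the spacing is ~1.2e-7; at 2^24 it is already 2.0,
           which is what bites you in the far corners of a huge level. */
        float positions[] = {1.0f, 1024.0f, 16777216.0f, 1e9f};
        for (int i = 0; i < 4; i++) {
            float p = positions[i];
            printf("near %g, the next float is %g away\n",
                   p, nextafterf(p, INFINITY) - p);
        }
        return 0;
    }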


I love this demonstration of that phenomenon: https://twitter.com/schteppe/status/1143111757751357440?s=20

Why 'even some scientific applications'? Don't nearly all scientific applications use floats?

Yeah, I assumed as much. I wasn't really thinking about that at the time, but knowing now that it exists in the wild, the only conceivable reason for it not to be used everywhere would be some kind of performance penalty.


