Hacker News

> When people talk about non-determinism of floating point, what they usually mean is non-associativity, that is (x+y)+z may not be exactly equal to x+(y+z).

Good example of this, in Python 3:

    >>> (0.1 + 0.2) + 0.3
    0.6000000000000001
    >>> 0.1 + (0.2 + 0.3)
    0.6

Every single time you run those two statements, you'll get the same results. Yes, they're non-associative, but that's specified and documented. That's not the same thing as non-deterministic in any way.

Yet, in accounting, you are expected to be able to sum a set of numbers in different ways and still get the same result.
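That's why accounting software typically uses decimal arithmetic rather than binary floats. A minimal sketch with Python's decimal module, where addition of exactly representable decimal amounts is order-independent:

```python
from decimal import Decimal

# The same three amounts as the float example above, but exact decimals.
nums = [Decimal("0.10"), Decimal("0.20"), Decimal("0.30")]

a = (nums[0] + nums[1]) + nums[2]
b = nums[0] + (nums[1] + nums[2])

print(a)  # 0.60
print(b)  # 0.60
print(a == b)  # True -- both groupings agree
```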

Yes, sorry, I was just intending to highlight non-associativity :) I agree it's not "non-deterministic".

The same code might be optimised in different ways by different compilers, though (or the same compiler with different flags). This might lead to different results for the same code. In that sense, it's non-deterministic.
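You can simulate the effect in pure Python: the two sums below add exactly the same values, one strictly left to right (as strict IEEE-754 evaluation would) and one correctly rounded via math.fsum, standing in for one of the reorderings a reassociating optimiser (e.g. GCC's -ffast-math) might emit:

```python
import math

vals = [0.1] * 10

# Left-to-right accumulation, as strict source-order evaluation would do:
s_lr = 0.0
for v in vals:
    s_lr += v

# Correctly rounded sum -- a different (here, exact) result that a
# reordered summation could produce from the same inputs:
s_exact = math.fsum(vals)

print(s_lr)     # 0.9999999999999999
print(s_exact)  # 1.0
```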

> The same code might be optimised in different ways by different compilers, though

It's not an optimisation if it changes the result! And if you use non-standard flags, that's your problem.

What counts as an optimization, and which changes are allowed, depends on the application.

MP3 is an optimization of WAV, yet it changes the result.

Some applications are OK with reducing the precision of calculations because they are not sensitive to small inaccuracies, or because they take effort to control those inaccuracies.

For example, graphics applications are typically heavy in FP calculations, yet they tend to care far less about precision than about performance. For those applications, trading a little accuracy for a performance increase is likely a win.
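A concrete sketch of what that precision reduction looks like: GPUs commonly work in single precision (binary32) rather than Python's double precision (binary64). The helper below, using the standard struct module, rounds a double to single precision and shows the small error that graphics code typically tolerates:

```python
import struct

def to_f32(x: float) -> float:
    """Round a binary64 float through binary32, a common GPU precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.1
print(to_f32(x) == x)       # False -- single precision drops low-order bits
print(abs(to_f32(x) - x))   # error on the order of 1e-9, harmless for pixels
```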
