Floating point numbers have well-defined rules, and every operation produces an exactly specified result; they are only "inaccurate" when used as a computer representation/approximation of real numbers. However, they do not exhibit any randomness (indeterminacy), and they are usually not a cause of the strange, hard-to-reproduce errors that concurrent, non-sequential memory instructions often cause.
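A minimal Python sketch of what "deterministic but rounded" means (the particular values are just convenient examples):

    # Each IEEE 754 operation is exactly specified: same inputs and
    # rounding mode give a bit-identical result, run after run.
    a, b, c = 0.1, 0.2, 0.3

    assert (a + b) + c == (a + b) + c  # always holds

    # Rounding error is deterministic too, not random: addition is
    # not associative, but the discrepancy is exactly reproducible.
    print((a + b) + c == a + (b + c))  # False, every single time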

It depends on whether you are using "accurate" in the STEM jargon sense or in common parlance. In jargon, there is a difference between accuracy and precision: http://en.wikipedia.org/wiki/Accuracy_and_precision

Floats are inaccurate (sometimes far from the real value) but precise (they always yield the same output given the same input).
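A quick Python sketch of that distinction (the value 0.1 is just an example):

    from decimal import Decimal
    import struct

    # Inaccurate: the double nearest to 0.1 is slightly off from the
    # real number 0.1 -- Decimal(0.1) shows the exact stored value.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # Precise: IEEE 754 pins down the bit pattern exactly, so every
    # conforming system stores the very same 64 bits for 0.1.
    print(struct.pack(">d", 0.1).hex())  # 3fb999999999999a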

Edit: reading the article, I see it calls out a special (different) meaning of "precision" in the IEEE float spec:

> In the case of full reproducibility, such as when rounding a number to a representable floating point number, the word precision has a meaning not related to reproducibility. For example, in the IEEE 754-2008 standard it means the number of bits in the significand, so it is used as a measure for the relative accuracy with which an arbitrary number can be represented.
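That IEEE 754 sense of "precision" is directly inspectable from Python, assuming the platform float is a 64-bit IEEE 754 double (true on virtually all modern systems):

    import sys

    # Precision in the IEEE 754-2008 sense: bits in the significand.
    # A 64-bit double has 53 (52 stored plus one implicit bit).
    print(sys.float_info.mant_dig)  # 53

    # Which bounds the relative error of representing a real number:
    print(sys.float_info.epsilon)   # 2.220446049250313e-16 == 2**-52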


Most "accuracy" issues that crop up with floats have nothing to do with either accuracy or precision, but simply with wrong expectations about decimal behaviour.

