In most applications, using an IEEE float is fine: it's a trade-off where a loss in accuracy is acceptable. Lots of digital things (images, for instance) are discrete approximations of continuous phenomena. The question is the degree of inaccuracy that can be tolerated, and what you have to do to keep it from propagating (which should be familiar to anyone who does math with finite-precision floating-point numbers).
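As a small Python sketch of what "preventing it from propagating" can look like: naive repeated addition lets rounding error accumulate, while a compensated summation such as the standard library's `math.fsum` keeps it contained (the specific values here are just illustrative).

```python
import math

# Naive left-to-right summation lets rounding error accumulate;
# math.fsum tracks the lost low-order bits and returns the correctly
# rounded sum of the same inputs.
values = [0.1] * 10

naive = 0.0
for v in values:
    naive += v

careful = math.fsum(values)

print(naive)    # 0.9999999999999999 -- the error has propagated
print(careful)  # 1.0
```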

 Floating-point numbers have well-defined rules and perfectly accurate calculations; they are only "inaccurate" when used as a computer representation/approximation of real numbers. They still do not exhibit any randomness (indeterminacy), and are usually not a cause of the strange, hard-to-reproduce errors that concurrent, non-sequential memory operations often cause.
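A quick way to see that determinism concretely (a Python sketch; the particular expressions are arbitrary): even an operation that loses almost all of a value's digits produces the same bit pattern every time it runs.

```python
import struct

# IEEE 754 arithmetic is a deterministic function of its inputs:
# rerunning the same operations yields bit-identical results.
def bit_pattern(x: float) -> str:
    return struct.pack('>d', x).hex()

a = 1.0 / 3.0          # rounded, but rounded the same way every time
b = (a + 1e16) - 1e16  # drastic loss of a's digits, yet fully reproducible

assert bit_pattern(a) == bit_pattern(1.0 / 3.0)
assert bit_pattern(b) == bit_pattern((1.0 / 3.0 + 1e16) - 1e16)
```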
 It depends on whether you are using "accurate" in the STEM jargon sense or in common parlance. In jargon, there is a difference between accuracy and precision: http://en.wikipedia.org/wiki/Accuracy_and_precision

 Floats are inaccurate (sometimes far from the real value) but precise (they always yield the same output given the same input).

 Edit: reading the article, it calls out a special (different) meaning of "precision" in the IEEE float spec:

 > In the case of full reproducibility, such as when rounding a number to a representable floating point number, the word precision has a meaning not related to reproducibility. For example, in the IEEE 754-2008 standard it means the number of bits in the significand, so it is used as a measure for the relative accuracy with which an arbitrary number can be represented.
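The "inaccurate but precise" point can be made exact in Python with the standard library's `Fraction`: the double chosen for the literal `0.1` is not one tenth, but it is one single, exactly defined rational number.

```python
from fractions import Fraction

# The double stored for the literal 0.1 is the nearest representable
# value, not 0.1 itself: inaccurate as a stand-in for the real number,
# yet a single, exactly defined rational.
print(Fraction(0.1))                # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(1, 10))  # False: not one tenth
print(float(Fraction(0.1)) == 0.1)  # True: the stored value itself is exact
```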
 Most "accuracy" issues that crop up with floats have nothing to do with either accuracy or precision, but simply with wrong expectations about decimal behaviour.
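The classic example of such wrong decimal expectations, with Python's `decimal` module as the contrast: none of 0.1, 0.2, or 0.3 is exactly representable in binary, so the rounded sum misses the rounded literal, while decimal arithmetic matches the pen-and-paper expectation.

```python
from decimal import Decimal

# Binary floats cannot represent 0.1, 0.2, or 0.3 exactly, so the
# rounded sum of the first two is not the double nearest 0.3;
# decimal arithmetic behaves the way people expect from base 10.
print(0.1 + 0.2 == 0.3)                                   # False
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
```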
