What Every Computer Scientist Should Know About Floating-Point Arithmetic (sun.com)
53 points by kqr2 on Dec 8, 2010 | 14 comments

Repetitio Est Mater Studiorum ("repetition is the mother of learning")

Instead of the same old paper again, how about a comparison of the ways current language implementations handle floating point numbers?

There is no meaningful difference, because it all depends on how CPUs handle FP arithmetic. And the only significant change since this paper was published is that IBM got its decimal FP formats accepted as an optional part of the standard everyone uses for FP arithmetic (IEEE 754).

Technically, that's true, but the more interesting question is "How do languages handle non-integers?"

Quite a few languages (like Lisp) have a rational data type, so that 1/3 + 1/4 is computed exactly. Others have a decimal data type, whose main purpose is calculations involving money; there, 0.1 + 0.2 = 0.3.
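A quick sketch of the same idea, using Python's standard library as a stand-in for the rational and decimal types the comment describes:

```python
from fractions import Fraction
from decimal import Decimal

# Rational arithmetic: 1/3 + 1/4 is exact, as in Lisp.
total = Fraction(1, 3) + Fraction(1, 4)
print(total)  # 7/12

# Decimal arithmetic: 0.1 + 0.2 really is 0.3.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Binary floating point, for contrast:
print(0.1 + 0.2 == 0.3)  # False
```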

And finally, I'm no expert on Mathematica, but IIRC it computes error bounds on your results and only displays decimal digits that fall within those bounds. That's the proper way of doing things, and again, 0.1 + 0.2 = 0.3.

Most current languages don't 'handle' floating point numbers themselves; they simply use the available CPU instructions to work with them.

The only way around that is to use a library that supports higher-precision floating point through emulation. Then again, that still doesn't change the theoretical background in any way.

The GNU Multiple Precision Arithmetic Library


TL;DR summary in the spirit of the times: when doing floating-point math, expect your results to float. There be dragons. Big ones.

Actually, my takeaway from this paper was exactly the opposite: if you understand how floats work, you can trust the results of floating-point calculation.
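To illustrate that takeaway, a small Python sketch: the familiar 0.1 + 0.2 "error" is not random noise but a bounded, predictable rounding, so a tolerance-based comparison behaves exactly as the theory predicts:

```python
import math

# 0.1 + 0.2 is not exactly 0.3 in binary floating point...
x = 0.1 + 0.2
print(x == 0.3)      # False
print(abs(x - 0.3))  # about one ulp of 0.3

# ...but each operation is correctly rounded, so the error is
# bounded and a relative-tolerance comparison succeeds.
print(math.isclose(x, 0.3))                # True
print(abs(x - 0.3) <= 2 * math.ulp(0.3))  # True
```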

And for those without a math or CS degree: http://floating-point-gui.de/

Friends don't let friends use floating point numbers.

Computers are so darn fast today that there is no excuse for not using an arbitrary precision library. Floating point should be thought of as an optimization, not as a default.
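As a sketch of what "arbitrary precision as the default" could look like, here is Python's `decimal` module standing in for a full arbitrary-precision library such as GMP/MPFR: precision becomes a knob rather than a hardware constant.

```python
from decimal import Decimal, getcontext

# Crank the working precision up to 50 significant digits.
getcontext().prec = 50

one_third = Decimal(1) / Decimal(3)
print(one_third)  # 0. followed by fifty 3s
```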

Unless you are writing DSP code (in which case this article should be nothing new), don't use floating point!

Great. This isn't just useful for computer scientists, though; it's also useful for other sciences that do computer simulations of any kind. And hopefully programmers will read it too and stop making awful mistakes such as using floating-point values for money amounts.
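The money mistake in one short Python example (Python's `Decimal` standing in for whatever decimal type your language offers): ten 10-cent items should total exactly one dollar, and with binary floats they don't.

```python
from decimal import Decimal

# Summing ten 10-cent items with binary floats drifts:
float_total = sum([0.10] * 10)
print(float_total == 1.00)  # False

# A decimal type (or plain integer cents) keeps money exact:
dec_total = sum([Decimal("0.10")] * 10)
print(dec_total == Decimal("1.00"))  # True
```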

I'm glad I work in a world where the performance overhead of decimal (both SQL and Ruby's BigDecimal) isn't too high.

I still remember when every Rails-based example with an invoice in it stored dollar amounts in the DB as floats. Ahh bless.

I remember seeing this article in the lecture notes of my Numerical Analysis class. Very cool read.
