
If you're like me and struggled to understand the traditional textbook floating point formula explanation, Fabien Sanglard (of Doom/Wolf3D Black Book fame) wrote a page that finally made sense to me: http://fabiensanglard.net/floating_point_visually_explained/...



Can I ask an embarrassing question to verify I'm not misunderstanding that page? Does this mean that floats are less precise the larger they are? You have as many bits to describe the "position of the number" between 0-1 as you do between 2048-4096


This is precisely what the term floating point means. The decimal point "floats" right or left depending on magnitude, due to the mantissa-exponent representation. The opposite is fixed point, where you have a fixed number of bits reserved for the whole and fractional parts. A 32-bit float can represent a vastly greater range of magnitudes than a 32-bit fixed-point value, with vastly greater precision for values near zero, but at the expense of absolute precision at large magnitudes.
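
A quick way to see that tradeoff concretely (a minimal sketch using math.ulp from Python 3.9+, which returns the gap between a double and the next representable one):

    import math

    # For a fixed-point format, the spacing between adjacent values is
    # constant everywhere; for floats it grows with magnitude.
    for x in [0.001, 1.0, 2048.0, 1e9, 1e16]:
        print(f"spacing near {x:g}: {math.ulp(x):.3g}")

    # The spacing near 1.0 is ~2.2e-16; near 1e16 it has grown to 2.0.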


At the time computer floating point was developed, people understood it implicitly, because it was just an automated version of the tools they already used: log tables and slide rules.

And likewise they knew better than to use floating-point where it doesn't work.


There are a couple of good images on this page which make clear that floating point precision 'degrades' in discrete steps as the number gets larger. It's not quite the same as counting in logarithmic terms, which is shown for comparison.

https://www.americanscientist.org/article/the-higher-arithme...


They are less precise in the absolute sense (the absolute error is larger) but equally so in the relative sense (the relative error is the same).

Well, essentially, of course.
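
To see both at once (a sketch using math.ulp from Python 3.9+): the absolute gap between adjacent doubles grows with the value, but the gap divided by the value stays pinned near machine epsilon.

    import math

    for x in [1.0, 2048.0, 1e9, 1e16]:
        gap = math.ulp(x)   # absolute spacing between adjacent doubles
        print(f"{x:g}: absolute gap {gap:.3g}, relative gap {gap / x:.3g}")

    # The absolute gap grows by ~16 orders of magnitude; the relative
    # gap stays within a factor of 2 of 2**-52 ~ 2.2e-16.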


Yes, and some larger numbers are not representable at all due to the gaps. E.g., 16777217 is not representable in single-precision IEEE 754. There are handy tables here for the range and precision values: https://en.wikipedia.org/wiki/IEEE_754-1985
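
You can watch the gap swallow it with nothing but the standard library (a sketch: round-tripping through a 4-byte single with struct):

    import struct

    def to_float32(x):
        # Pack as an IEEE 754 single and unpack again to see the
        # value that actually gets stored.
        return struct.unpack('<f', struct.pack('<f', x))[0]

    print(to_float32(16777216.0))  # 16777216.0 -- 2**24, exactly representable
    print(to_float32(16777217.0))  # 16777216.0 -- falls in a gap, rounds away
    print(to_float32(16777218.0))  # 16777218.0 -- the spacing up here is 2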


Yeah, and that's why you often have to rescale the problem before applying numerical methods.
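
A toy illustration of why (a sketch; the function names are mine): a variance computed directly on large-magnitude data is destroyed by cancellation, while shifting the data toward zero first -- one common form of rescaling -- preserves it.

    # Naive one-pass variance, E[x^2] - E[x]^2: near 1e9 the squares
    # are near 1e18, where adjacent doubles are ~128 apart, so the
    # final subtraction cancels nearly all of the true signal.
    def variance_naive(xs):
        n = len(xs)
        return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

    # Rescaled: subtract the mean first so all the arithmetic happens
    # near zero, where absolute precision is abundant.
    def variance_shifted(xs):
        n = len(xs)
        m = sum(xs) / n
        return sum((x - m) ** 2 for x in xs) / n

    data = [1e9 + 1.0, 1e9 + 2.0, 1e9 + 3.0]  # true variance: 2/3
    print(variance_naive(data))    # wildly wrong, can even come out negative
    print(variance_shifted(data))  # 0.6666666666666666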


> You have as many bits to describe the "position of the number" between 0-1 as you do between 2048-4096

You mean 0.5-1

0-1 covers about a quarter of all representable numbers, just as many as 1-inf
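
That falls out of a handy property: for a given sign, IEEE 754 floats order the same way as their bit patterns, so you can count them directly (a sketch, for 32-bit singles):

    import struct

    def bits(x):
        # Bit pattern of x as a float32, read back as an unsigned int.
        return struct.unpack('<I', struct.pack('<f', x))[0]

    total = 2 ** 32                    # every possible 32-bit pattern
    print(bits(1.0) / total)           # patterns in [0, 1): ~0.248
    print((bits(float('inf')) - bits(1.0)) / total)  # [1, inf): exactly 0.25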


Your understanding is correct.


The author's presentation of the traditional explanation makes sense only if M is visualized in binary while E remains decimal. But a reader coming to the subject for the first time is in no position to guess this, and would need a clue like subscript notation: M_2, E_10.

Traditional explanation done well:

https://www.cs.cornell.edu/~tomf/notes/cps104/floating.html


A one-page explanation I wrote after it finally clicked for me -- floating point is just binary scientific notation! Not sure why more places don't explain it like this. https://github.com/dasl-/floating-point-numbers#floating-poi...
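
For instance (a minimal sketch): the standard library will hand you the significand and exponent directly, and float.hex() prints a double in what is literally binary scientific notation.

    import math

    x = 6.25
    m, e = math.frexp(x)     # x == m * 2**e, with 0.5 <= m < 1
    print(m, e)              # 0.78125 3
    print(m * 2 ** e == x)   # True

    # Hex float notation: significand in hex, power-of-two exponent.
    print(x.hex())           # 0x1.9000000000000p+2, i.e. 1.5625 * 2**2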


With some important algebraic quirks, like: -0 and 0 are distinct numbers that compare equal to each other; 1/0 is infinity (depending on exception mode), but 1/-0 is -infinity; and there are all the different kinds of NaNs, etc.
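
A few of those are easy to poke at from a REPL (a sketch; note that plain Python raises ZeroDivisionError for 1/0.0 rather than following the IEEE default, so the division-by-signed-zero cases are left to a comment):

    import math

    print(0.0 == -0.0)               # True: they compare equal...
    print(math.copysign(1.0, -0.0))  # -1.0: ...but the sign bit is really there
    print(math.atan2(0.0, -0.0))     # pi, while atan2(0.0, 0.0) is 0.0

    nan = float('nan')
    print(nan == nan)                # False: NaN is unequal even to itself

    # In C (or NumPy), 1.0/0.0 yields inf and 1.0/-0.0 yields -inf;
    # plain Python raises ZeroDivisionError there instead.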


Plugging my own explanation because HN seemed to like it:

https://news.ycombinator.com/item?id=15360485



