
This is precisely what the term floating point means. The decimal point "floats" right or left depending on magnitude, thanks to the mantissa-exponent representation. The opposite is fixed point, where a fixed number of bits is reserved for the whole and fractional parts. A 32-bit floating-point number can represent a vastly greater range of magnitudes than a 32-bit fixed-point one, with much finer precision for values near zero, but at the expense of absolute precision at large magnitudes.
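A minimal Python sketch of that trade-off; NumPy and the signed Q16.16 fixed-point layout here are my own illustrative choices, not anything from the comment above:

  import numpy as np

  # Q16.16 fixed point: every value is a multiple of 2**-16, so the absolute
  # precision is constant (~1.5e-5) and a signed value tops out just below 2**15.
  FIXED_STEP = 2.0 ** -16

  # float32: the gap between neighbouring representable values (one "ulp")
  # scales with the magnitude of the number itself.
  for x in [1e-6, 1.0, 1e6, 1e30]:
      gap = float(np.spacing(np.float32(x)))   # distance to the next float32 up
      print(f"x = {x:8.0e}   float32 gap = {gap:.2e}   Q16.16 step = {FIXED_STEP:.2e}")

  # At 1e-6 the float32 gap (~1e-13) is far finer than the fixed-point step;
  # at 1e6 it is already coarser (~6e-2); and 1e30 cannot be represented in
  # Q16.16 at all, while float32 handles it with ~8e22 absolute spacing.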



At the time computer floating point was developed, people understood it implicitly, because it was just an automated version of the tools they were already using: log tables and slide rules.

And likewise they knew better than to use floating point where it wouldn't work.


There are a couple of good images on this page which make it clear that floating-point precision 'degrades' in discrete steps as the number gets larger (a quick sketch of that stepping is below the link). It's not quite the same as counting in logarithmic terms, which is shown for comparison.

https://www.americanscientist.org/article/the-higher-arithme...
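A rough standard-library sketch of those steps, with arbitrarily chosen sample values (math.ulp needs Python 3.9+):

  import math

  # math.ulp(x) is the gap from a positive x up to the next representable double.
  for x in [3.9, 4.0, 7.9, 8.0, 15.9, 16.0]:
      print(f"x = {x:5.1f}   gap to next double = {math.ulp(x):.3e}")

  # 3.9 lies in the binade [2, 4) and 7.9 in [4, 8), so their gaps differ by
  # exactly a factor of two; 4.0 and 7.9 share [4, 8) and have the same gap.
  # A genuinely logarithmic scale would widen the gap continuously instead.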



