
This sounds really good. I find floating points completely unusable for any situation where accuracy is important. It's fine when being vaguely in the right ballpark is good enough, but I don't want to have to deal with 2 + 4.1 = 6.1000000001, or x/1000000 + y/1000000 != (x+y)/1000000.
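The kind of surprise described above is easy to reproduce with ordinary Python doubles; the classic case is 0.1 + 0.2 (the exact numbers in the comment may or may not round the same way, so this sketch uses values that are known to misbehave):

```python
# 0.1 and 0.2 have no exact binary representation, so their
# stored values carry tiny errors that surface in the sum.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# The rounding also breaks associativity: grouping matters.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False
```

This is not specific to Python; any language using IEEE 754 binary doubles shows the same results.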

Well, those will not be solved by posits; it is still an approximate floating-point format. What it does is redistribute precision toward what Gustafson considers a better default, and drop edge cases in order to free up more bits for precision.

edit: One of several examples found in the "Posits: the good, the bad and the ugly" paper linked in the thread: 10.0 * 2.0 = 16.0 in posit8

Really? My impression from the article was that this was exactly one of the things posits were supposed to fix, at least for numbers with small exponents.

I'm not sure how 10.0 * 2.0 = 16.0 can happen. I'm not sure what posit8 means, but that result can only be correct if it switches halfway from base-8 representation to base 10, which is a bit weird, but at least the calculation would be correct. (Otherwise it would be so incorrect as to be unusable for anything.)

I am serious (and recommend reading the paper I quoted to get a good understanding of the trade-off offered by posits).

Overall, posits are a new trade-off that gives you better precision (nothing exact, it is still an approximation) as long as you manage to keep all your numbers in a small range. Once you get out of that range, precision drops off sharply (whereas the precision of classical floating points degrades gradually).

Posit8 is equivalent to an 8-bit floating-point format (a minifloat), making it an easy target for pathological cases, but the example still illustrates the fact that, contrary to floating-point arithmetic, multiplication by a power of two is not exact with posits (one of several good properties we take for granted and would lose when switching to posits).
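For the float side of that property: in IEEE 754 binary formats, multiplying by a power of two only shifts the exponent and leaves the significand untouched, so (barring overflow/underflow) the result is exact. A quick sanity check with Python doubles:

```python
import math

x = 10.0
print(x * 2.0)   # 20.0 -- exact, unlike the posit8 example above

# frexp splits a float into significand and exponent; doubling
# changes only the exponent, never the significand.
m1, e1 = math.frexp(x)
m2, e2 = math.frexp(x * 2.0)
print(m1 == m2, e2 == e1 + 1)   # True True
```

The same holds for division by powers of two, which is why expressions like x / 2 are lossless with binary floats.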

If it's just a variation of the same problems behind floats, then I'm not that interested. Well, I guess it depends on how small the range is.

Here's the paper they mentioned: https://hal.inria.fr/hal-01959581v3/document

The problem you describe here arises from using binary fractions, that is, a power of 2 as a denominator. You cannot represent the decimal fraction 0.1 as a binary fraction. You would have the same problem representing 1/3 with decimal fractions. It just does not work.

You can solve it by switching to decimal floating points, which are defined by IEEE as well (the decimal32/64/128 formats of IEEE 754-2008).
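Python ships a decimal module in its standard library (following the General Decimal Arithmetic specification, in the same spirit as IEEE 754's decimal formats), which makes the fix easy to try:

```python
from decimal import Decimal

# Decimal fractions like 4.1 are represented exactly, so the
# grandparent's example comes out clean.
a = Decimal("2") + Decimal("4.1")
print(a)                        # 6.1 -- no binary rounding artifact

# The trade-off still exists, just with a different base:
# 1/3 has no finite decimal expansion either.
print(Decimal(1) / Decimal(3))  # inexact, truncated to context precision
```

Note the string constructor: Decimal(4.1) would faithfully convert the already-rounded binary double, defeating the purpose.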

