Well, those will not be solved by posits; it is still an approximate floating-point format. What it does is redistribute precision toward what Gustafson considers a better default and drop edge cases in order to free up more bits for precision.

edit: One of several examples found in the "Posits: the good, the bad and the ugly" paper linked in the thread: 10.0 * 2.0 = 16.0 in posit8
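
For the curious, here is a rough sketch of why this can happen, assuming posit8 with es = 0 (the parameters used for 8-bit posits in early drafts of the standard); the decoder and tie-breaking below are my own approximation, not code from the paper:

    # Sketch of an 8-bit posit decoder, assuming es = 0.
    # Positive patterns only; negatives are the two's complement and
    # don't matter for this example.
    def decode_posit8_es0(bits):
        if bits == 0:
            return 0.0
        s = f"{bits:08b}"
        regime_bit = s[1]
        run, i = 1, 2
        while i < 8 and s[i] == regime_bit:   # regime = run of identical bits
            run, i = run + 1, i + 1
        i += 1                                # skip the regime's terminating bit
        k = run - 1 if regime_bit == "1" else -run
        frac = s[i:]                          # remaining bits are the fraction
        f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
        return 2.0 ** k * (1.0 + f)           # es = 0, so useed = 2

    values = {p: decode_posit8_es0(p) for p in range(1, 128)}
    exact = 10.0 * 2.0                        # both operands are exact posit8 values
    # Round to nearest; on a tie prefer the even encoding (posit rounding rule).
    nearest = min(values, key=lambda p: (abs(values[p] - exact), p % 2))
    print(exact, "->", values[nearest])       # 20.0 -> 16.0

Both 10.0 and 2.0 are exactly representable, but the nearest posit8 values around the exact product 20.0 are 16.0 and 24.0, and the tie goes to 16.0.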




Really? My impression from the article was that this was exactly one of the things posits were supposed to fix, at least for numbers with small exponents.

I'm not sure how 10.0 * 2.0 = 16.0. I'm not sure what posit8 means, but it can only be correct if it switches halfway from base-8 representation to base 10, which is a bit weird, but at least the calculation would be correct. (Otherwise it would be so incorrect as to be unusable for anything.)


I am serious (and recommend reading the paper I quoted to get a good understanding of the trade-off offered by posits).

Overall, posits are a new trade-off that will give you better precision (nothing exact, it is still an approximation) as long as you manage to keep all your numbers in a small range. Once you get out of that range, precision drops off sharply (whereas the precision of classical floating point degrades gradually).

Posit8 is equivalent to an 8-bit floating point (a minifloat), which makes it an easy target for pathological cases, but the example still illustrates the fact that, contrary to floating-point arithmetic, multiplication by a power of two is not exact with posits (one of several good properties we take for granted and would lose when switching to posits).
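
To illustrate the property being referred to (a quick check of my own, not anything from the paper): in IEEE binary floats, multiplying by a power of two only increments the exponent, so as long as nothing overflows or falls into the subnormal range the result is exact:

    import math, random

    random.seed(0)
    for _ in range(10_000):
        # Sample doubles far from overflow and subnormal territory.
        x = random.uniform(1e-300, 1e300) * random.choice([-1.0, 1.0])
        m, e = math.frexp(x)                      # x == m * 2**e, 0.5 <= |m| < 1
        assert math.frexp(x * 2.0) == (m, e + 1)  # same significand, exponent + 1
        assert (x * 2.0) / 2.0 == x               # round-trips exactly
    print("x * 2.0 was exact for every sampled double")

The posit8 example above shows that this guarantee does not carry over to posits.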


If it's just a variation of the same problems behind floats, then I'm not that interested. Well, I guess it depends on how small the range is.


Here's the paper they mentioned: https://hal.inria.fr/hal-01959581v3/document



