
Hmm. Their results seem too good to be true without using the quire. Look:

The largest and smallest exponents posit16 (es = 1) can represent are:

min exp: 2^{ -(14<<es), bit pattern 0x0001 } * (1+0) = 2^(-28)
max exp: 2^{ +(14<<es), bit pattern 0x7FFF } * (1+0) = 2^(+28)

Note those values have only the implicit bit as significand; there is no space left to encode anything other than the sign and regime.
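A quick sketch of those extremes (assuming the draft-standard posit16 configuration with es = 1, so useed = 2^(2^es) = 4):

```python
# Posit16 dynamic range: the regime contributes a factor useed^k,
# and a 16-bit posit can hold a regime run of at most 14 bits.
es = 1
useed = 2 ** (2 ** es)   # 4 for es = 1
k_max = 16 - 2           # 14: longest regime run leaves no fraction bits

maxpos = useed ** k_max    # 4^14  = 2^28
minpos = useed ** -k_max   # 4^-14 = 2^-28

print(maxpos == 2 ** 28, minpos == 2 ** -28)  # True True
```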

While a double (binary64) can represent a much larger exponent range:

min exp: 2^{ (1-1023) } * (1+0) = 2^(-1022)
max exp: 2^{ (2046-1023) } * (1+0) = 2^(+1023)

Also, all normal doubles have 52 explicit bits of significand precision, while a posit16 has at most 16-1-2-1 = 12 fraction bits.
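Those binary64 limits can be checked directly against the runtime's float parameters:

```python
import sys

# binary64 sanity check: 11-bit exponent (bias 1023), 52 stored fraction bits.
print(sys.float_info.min)       # smallest normal double = 2^-1022
print(sys.float_info.mant_dig)  # 53 = 52 stored bits + the implicit leading 1

assert sys.float_info.min == 2.0 ** -1022
assert sys.float_info.max == (2.0 - 2.0 ** -52) * 2.0 ** 1023
```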

Their significands are encoded slightly differently, though. I'm not sure that alone would be enough to achieve such different results without the quire.

The scaling he mentions could be done very easily by introducing a bias parameter in an FPGA implementation. I might add that to my work.

I pinged Klöwer on Twitter.

Tweet exchange + link, for anyone else stumbling upon this later:

> @milankloewer Would mind joining this thread on HN: https://news.ycombinator.com/item?id=20392612 We are discussing the results of your work with posit.

> Happy to join. In short: I did not use any quires so far. All simulations are entirely based on 16bit posits, but I compare them to Float16 and not Float64. Tricks are: Scaling and rewriting algorithms to avoid very large and very small numbers, that's it.


You clearly know more about this topic than I do; thanks for taking the time to explain your viewpoints.

> I pinged Klöwer on twitter.

Probably the most sensible, easiest way to clear this up, haha :)
