
Hardware cost evaluation of the posit number system - matt_d
https://hal.inria.fr/hal-02131982
======
mooman219
Facebook apparently achieved a performance gain with their designs
[https://code.fb.com/ai-research/floating-point-math/](https://code.fb.com/ai-research/floating-point-math/)

~~~
ori_b
For a posit-based float with 127 unique unsigned values: if you can tolerate
that little precision, you can implement arithmetic as a smallish lookup
table.

~~~
AstralStorm
Tensor cores seem like a fit...

------
benjaminogles
I have not read the paper yet, but the abstract says they compared two
hardware adders synthesized from C++ code. That seems like a fair comparison,
but I wonder whether a hand-designed posit adder could be much more efficient
and more competitive with hand-designed floating-point adders. Then again,
the number of lookup tables used in synthesis should be roughly proportional
to the number of gates on a dedicated chip, so comparing hand-optimized
adders on ICs might show no difference either.

------
s_tec
Bummer. The posit number system
([http://www.johngustafson.net/pdfs/BeatingFloatingPoint.pdf](http://www.johngustafson.net/pdfs/BeatingFloatingPoint.pdf))
is super-cool, and its inventors often speak of its potential performance
advantages. On the other hand, their claims did seem a little "too good to be
true": variable-width fields require mux gates, and somebody needs to pay
for those.

------
klingonopera
Am I interpreting the delay results correctly? Is that the time needed for an
add/multiply operation to complete?

Because there isn't much difference in that regard: the values deviate by no
more than 25%, which is not insignificant, but not overwhelming either.

What I find even more surprising: is there really almost no difference in
latency between software-implemented and hardware-supported FP operations?

