Hacker News

Is it true that these are just a curiosity (albeit a compelling one) and not practical until hardware support for them comes along, because until then they will always be slower than floating point, which does have hardware support?



You could say that about anything at some point, not least RISC-V.

However, there _are_ hardware implementations of posits available, but given the inertia of existing code and data that assumes IEEE-754, posits are more likely to see adoption in specialized areas where the higher information density is enough of a win. Or in greenfield applications without legacy concerns.


I think one all-around-good use case/feature is the fact that multiplying a very large number by a very small number has significantly less error in posits vs. IEEE754.


I don't think this is true. In floats, multiplying a big number by a small number has the same accuracy as any other non-overflowing/underflowing multiplication. In posits, the multiplication will be exact, but only because the big and small numbers were less accurate in the first place. If you have something like f(x)*g(y) where f(x) is really big and g(y) is really small, floats will give more accuracy.
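A minimal sketch of the float side of this claim (the numbers here are arbitrary, not from the thread): a single IEEE-754 binary64 multiplication is correctly rounded, so its relative error is bounded by the unit roundoff 2**-53 no matter how far apart the operand magnitudes are, as long as nothing overflows or underflows.

```python
from fractions import Fraction

# Arbitrary big and small operands; the product is well inside the
# normal binary64 range, so no overflow or underflow occurs.
big = 1.234e150
small = 5.678e-140
product = big * small

# Floats convert to Fraction exactly, so this reference is exact.
exact = Fraction(big) * Fraction(small)
rel_err = abs(Fraction(product) - exact) / exact

# One correctly rounded operation: relative error <= 2**-53.
print(float(rel_err) <= 2**-53)  # True
```

The same bound holds for any pair of operands whose product stays in range, which is the sense in which "big times small" is no worse than any other float multiplication.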


I'm going off the chart shown in Figure 15 of the cited Cornell post about it, which seems to disagree with your assertion


That's exactly what I was saying. The result is exact, but it has lower accuracy.
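A small sketch of why the operands are less accurate to begin with, based on the standard posit layout (sign bit, a run of regime bits plus a terminator, es exponent bits, then the fraction). The regime run grows with the magnitude of the value, so extreme values have fewer fraction bits left over; the function below just counts them.

```python
def posit_fraction_bits(n, es, regime_run):
    """Fraction bits remaining in an n-bit posit with es exponent bits,
    after a regime run of `regime_run` identical bits plus a terminator.
    Longer regime runs encode more extreme magnitudes."""
    used = 1 + regime_run + 1 + es  # sign + regime run + terminator + exponent
    return max(n - used, 0)

# Near 1.0 the regime run is a single bit, leaving the most fraction bits;
# values of extreme magnitude burn most of the word on the regime.
print(posit_fraction_bits(32, 2, 1))   # 27 fraction bits near 1.0
print(posit_fraction_bits(32, 2, 20))  # 8 fraction bits at extreme magnitude
```

By contrast, binary32 always carries 23 fraction bits (plus the implicit leading bit), so a very big f(x) and a very small g(y) each enter the multiplication with full precision in floats but tapered precision in posits.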


ML training is bottlenecked on arithmetic performance. Computational fluid dynamics is bottlenecked on arithmetic performance. There are others, but those are the two obvious ones.

Nobody in any bottlenecked field is sufficiently motivated to build a chip that is potentially worth many millions of dollars? Really? That's a pretty strong statement.

And, if posits can't break those bottlenecks, are they really better?

I would cite 3D graphics, but I suspect that memory bandwidth is more the problem as it seems that people are willing to double or quadruple the computational workload to avoid a memory stall or roundtrip. And I don't have enough background to discuss whether HFT is bottlenecked on arithmetic performance.



