
Beyond Floating Point – Next Generation Computer Arithmetic - merraksh
http://insidehpc.com/2017/02/john-gustafson-presents-beyond-floating-point-next-generation-computer-arithmetic/
======
CalChris
Gustafson gives a compelling argument, especially for low precision like 8b
and 16b. In 8b, operations can be implemented with table lookups on FPGAs.
Unums are better than IEEE, but can't an application-specific choice do better,
given that we're talking about choosing 256 numbers?

------
doesnotexist
This was more interesting than I expected. Who knew that many architectures
have a flag that indicates that a float value is inexact, but that almost no
language exposes it to programmers. Or that IEEE 754 isn't really a standard
so much as a set of guidelines.

He makes a compelling argument for why his proposed ubit/posit represents
mathematically truthful statements, while floating point lies to you.
The tradeoffs make a lot of sense: no more overflow/underflow, and better
closure under arithmetic operations.

“Floating point numbers are like piles of sand; every time you move them
around, you lose a little sand and pick up a little dirt.” -- Brian Kernighan

------
slededit
What's really needed is an error term that can be computed in parallel with
your normal floating-point operation. Floating-point code is nearly impossible
to write correctly in practice if you don't account for the accumulated error.

~~~
erichocean
There are interval arithmetic libraries for C++ that do that, and they are
fast in the common case where the error stays within ordinary floating-point
bounds.

