
The rise of mixed precision arithmetic - johndcook
https://nickhigham.wordpress.com/2015/10/20/the-rise-of-mixed-precision-arithmetic/
======
natosaichek
Rather than just adding more bits (doubles and double-doubles), I like the
idea of moving the boundary between the exponent and mantissa as the
application requires, as well as adding the ability to determine how
uncertain the number actually is.

[http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPrecision1.pdf](http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPrecision1.pdf)

www.amazon.com/The-End-Error-Computing-Computational/dp/1482239868
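
(A toy Go sketch of the movable-boundary idea, not from the slides and with
made-up names: the same 16 bits decoded with a 5-bit exponent, as in IEEE
half precision, versus an 8-bit exponent, as in bfloat16. The unum proposal's
uncertainty bit is not modelled here.)

```go
package main

import (
	"fmt"
	"math"
)

// decode reads bits as a toy 16-bit float with 1 sign bit, expBits
// exponent bits and 15-expBits fraction bits, so the exponent/mantissa
// boundary is a parameter instead of being fixed by the format.
// (Inf/NaN encodings are ignored to keep the toy short.)
func decode(bits uint16, expBits uint) float64 {
	fracBits := 15 - expBits
	sign := 1.0
	if bits>>15 == 1 {
		sign = -1.0
	}
	expMask := uint16(1)<<expBits - 1
	fracMask := uint16(1)<<fracBits - 1
	exp := int(bits >> fracBits & expMask)
	frac := float64(bits&fracMask) / float64(uint32(1)<<fracBits)
	bias := 1<<(expBits-1) - 1
	if exp == 0 { // subnormal: no implicit leading 1
		return sign * frac * math.Pow(2, float64(1-bias))
	}
	return sign * (1 + frac) * math.Pow(2, float64(exp-bias))
}

func main() {
	x := uint16(0x4248)
	fmt.Println(decode(x, 5)) // 3.140625: half-precision-like, finer steps
	fmt.Println(decode(x, 8)) // 50: bfloat16-like, wider range instead
}
```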

------
atemerev
Still, there has been nearly no progress on fast decimals, which are
extremely important in financial applications.

I'd even say that the only place where floating point is necessary is in
simulations (physics, 3D, analog signals), all of which should properly be
done on GPUs. Everything else (2D layouts, finance, data processing) is
better served by either rationals or decimals.

We should remove floating point support from general-purpose CPUs and leave it
to GPUs, where it belongs.
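
(For what it's worth, a minimal Go sketch of the exactness argument, using
the standard math/big rationals; the choice of rationals rather than a
decimal type is just for illustration.)

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Binary floating point cannot represent 0.1 exactly, so repeated
	// addition drifts: the classic hazard in financial code.
	sum := 0.0
	for i := 0; i < 10; i++ {
		sum += 0.1
	}
	fmt.Println(sum == 1.0) // false: sum is 0.9999999999999999

	// Exact rational arithmetic (math/big) has no such drift.
	tenth := big.NewRat(1, 10)
	total := new(big.Rat)
	for i := 0; i < 10; i++ {
		total.Add(total, tenth)
	}
	fmt.Println(total.Cmp(big.NewRat(1, 1)) == 0) // true
}
```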

~~~
kazinator
_Decimals_ are extremely important in financial applications.

 _Fast_ decimals may range from unimportant to extremely important in
specific situations, depending on what counts as fast enough in each.

> _We should remove floating point support from general-purpose CPUs and leave
> it to GPUs, where it belongs._

That's just silly. Let's move an extremely common and deeply integrated
programming language feature onto special hardware.

You do realize that "computer science" once actually consisted of little more
than numerical analysis? Now you want to kick the latter off the computer. :)

~~~
SixSigma
My first PC was a 386 with a 387 floating point co-processor. [1]

Intel chips have only had floating-point support by default since 1989, with
the 486DX [2], and a cheaper version without it was still sold as the 486SX
[3].

The Motorola 68000 family was similarly constrained.

[1] [https://en.wikipedia.org/wiki/X87](https://en.wikipedia.org/wiki/X87)

[2] [https://en.wikipedia.org/wiki/486DX](https://en.wikipedia.org/wiki/486DX)

[3] [https://en.wikipedia.org/wiki/486SX](https://en.wikipedia.org/wiki/486SX)

~~~
dalke
Which would suggest that on-chip floating point integration is important
enough, across a wide enough range of applications, to justify the expense.

~~~
atemerev
That was before GPUs. The GPU is the modern co-processor.

~~~
dragonwriter
> That was before GPUs. The GPU is the modern co-processor.

And the trend of on-die GPUs as part of the CPU -- the same thing that
happened with FPUs before them -- has already begun. _Again_ showing that
on-chip floating point integration (even of the kind of floating-point
operations supported by GPUs, as distinct from the functions already inherent
in CPUs that have integrated FPUs) is important enough, across a wide enough
range of applications, to justify the expense.

------
dzdt
Is most of what you do numerical calculation? How much complexity are you
willing to spend (in terms of development and maintenance programmer time) to
buy a factor-of-2 speedup in the execution time of your numerically heavy
routines? Have you already optimised the hell out of those routines? Unless
the answers are "yes", "a lot", and "yes", mixed precision arithmetic is not
the answer for you.
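
(For readers weighing that trade-off, here is about the cheapest
mixed-precision pattern there is, sketched in Go: float32 data accumulated in
float64. Anything fancier, like the iterative refinement in the article,
costs real engineering effort.)

```go
package main

import "fmt"

func main() {
	// Sum ten million copies of float32(0.1). A float32 accumulator
	// loses digits once the running sum dwarfs each addend; widening
	// just the accumulator to float64 fixes that for one line of code.
	const n = 10_000_000
	x := float32(0.1)
	var s32 float32
	var s64 float64
	for i := 0; i < n; i++ {
		s32 += x
		s64 += float64(x)
	}
	fmt.Println(s32) // far above the true total
	fmt.Println(s64) // ~1.0000000149e+06, i.e. n * float64(float32(0.1))
}
```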

------
vorg
> Arbitrary precision floating point arithmetic is available through [...] the
> core data type BigFloat in the new language Julia

Go 1.5 added a big.Float type.
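
(A minimal sketch of that type as it exists in current Go; the precision is
per value, chosen by the caller.)

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// A big.Float carries its own mantissa precision; here 200 bits,
	// well beyond float64's 53.
	one := new(big.Float).SetPrec(200).SetInt64(1)
	three := new(big.Float).SetPrec(200).SetInt64(3)
	third := new(big.Float).SetPrec(200).Quo(one, three)
	fmt.Println(third.Text('f', 50)) // fifty correct decimal digits of 1/3
}
```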

