
Improving Supercomputing Accuracy by Sacrificing Precision - jonbaer
https://www.top500.org/news/improving-supercomputing-accuracy-by-sacrificing-precision/
======
pklo
A large portion of engineering calculations requires something like three
significant digits of precision; the problem is that these digits need to
be accurate, i.e. we need a good error estimate, consistent with that
precision. Most numerical algorithms do not provide such estimates, and so
we've fallen into a cargo cult of excess precision, hoping that it will save
us from the accuracy loss in the calculation. It often works, but
occasionally fails, and we have no idea which of the two happened.

I have a simple example of that: a polynomial rp(x,y) := 1335/4*y^6 +
x^2*(11*x^2*y^2 - y^6 - 121*y^4 - 2) + 11/2*y^8 + x/(2*y) that evaluates to
about 1.1726 in both single and double precision, but whose real value is
-0.827....

So, the bottom line is that I'd gladly trade off precision for better
accuracy guarantees--but it turns out that's surprisingly hard in the
general case.
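
One concrete way to get such a guarantee is interval arithmetic: every
operation rounds outward, so the result is a rigorous enclosure of the true
value rather than a point value that may be nonsense. A minimal sketch of
the idea, assuming the third-party mpmath library (its iv context) and the
inputs given in the follow-up comment below:

    from mpmath import iv   # third-party interval-arithmetic context

    iv.dps = 16             # roughly double precision
    x = iv.mpf(77617)
    y = iv.mpf(33096)

    # The polynomial from above; every operation widens the enclosure to
    # account for rounding, so the printed interval is guaranteed to
    # contain the true value.
    r = (1335/4 * y**6 + x**2 * (11*x**2*y**2 - y**6 - 121*y**4 - 2)
         + 11/2 * y**8 + x/(2*y))
    print(r)   # enormously wide: an honest "16 digits were not enough"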

~~~
pklo
Sorry--forgot to specify the values for the polynomial: x = 77617 and
y = 33096. In Maxima, the effects of precision can be simulated thus:

    for fpprec:10 step 5 thru 50 do print (rp(77617.0b0, 33096.0b0));

    1.17260394b0
    1.17260394005318b0
    1.8014398509481985173b16
    1.172603940053178631858835b0
    1.17260394005317863185883490452b0
    1.1726039400531786318588349045201837b0
    -8.27396059946821368141165095479816291999b-1
    -8.27396059946821368141165095479816291999033116b-1
    -8.2739605994682136814116509547981629199903311578439b-1

Google will find you more info about Rump's polynomial.
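
The experiment is also easy to reproduce without Maxima. Here's a sketch in
plain Python (standard library only), using exact rational arithmetic as the
reference; Fraction keeps every digit, so it recovers the true value
-54767/66192 exactly:

    from fractions import Fraction

    def rp(x, y):
        # Rump's polynomial; works for floats and Fractions alike
        return (Fraction(1335, 4) * y**6
                + x**2 * (11*x**2*y**2 - y**6 - 121*y**4 - 2)
                + Fraction(11, 2) * y**8
                + x / (2*y))

    # Hardware doubles: catastrophic cancellation; the result is garbage
    # (the exact wrong value depends on machine and evaluation order).
    print(rp(77617.0, 33096.0))

    # Exact rationals: -54767/66192, about -0.82739605994682...
    print(rp(Fraction(77617), Fraction(33096)))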

~~~
GFK_of_xmaspast
Numbers of that magnitude combined with sixth and eighth powers should get you
thinking 'hey, something's fishy here'.

Also, here's a good reference:
[http://epubs.siam.org/doi/book/10.1137/1.9780898718027](http://epubs.siam.org/doi/book/10.1137/1.9780898718027)

And I'm too headachey to bother trying, but I wonder what happens if you
ignore that x/(2y) part and put the rest of the thing into Horner's form.
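
For anyone who feels like trying it, here is a minimal Python sketch, with
exact Fractions as the reference. Read the polynomial part as a quartic in
t = y^2 with coefficients 11/2, 1335/4 - x^2, -121 x^2, 11 x^4, -2 x^2 and
Horner's rule follows. Don't expect it to rescue double precision, though:
the polynomial part is exactly -2 at these inputs, while the intermediate
products still reach ~4e20, so the cancellation survives the regrouping:

    from fractions import Fraction

    def rump_horner(x, y):
        # Polynomial part only (the x/(2y) term dropped), as a quartic
        # in t = y^2 evaluated by Horner's rule.
        t = y * y
        p = Fraction(11, 2)                      # 11/2 * t^4 ...
        p = p * t + (Fraction(1335, 4) - x**2)   # + (1335/4 - x^2) * t^3
        p = p * t - 121 * x**2                   # - 121 x^2 * t^2
        p = p * t + 11 * x**4                    # + 11 x^4 * t
        return p * t - 2 * x**2                  # - 2 x^2

    print(rump_horner(77617.0, 33096.0))                  # still way off
    print(rump_horner(Fraction(77617), Fraction(33096)))  # exact: -2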

------
wohlergehen
TLDR: when using an iterative solver, do the first few calculations in
reduced precision, and only then switch to double -- it is more energy
efficient.
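
The canonical instance is mixed-precision iterative refinement for linear
systems: do the expensive O(n^3) factorization in single precision, then
recover full accuracy with a few cheap O(n^2) correction steps in double. A
minimal sketch with NumPy/SciPy (the standard textbook scheme, not
necessarily the article's exact algorithm):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def mixed_precision_solve(A, b, iters=5):
        # Expensive part in cheap single precision: the LU factorization
        lu, piv = lu_factor(A.astype(np.float32))
        x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            # Cheap refinement steps: residual in full double precision,
            # correction via the already-computed single-precision LU
            r = b - A @ x
            dx = lu_solve((lu, piv), r.astype(np.float32))
            x += dx.astype(np.float64)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(A @ x - b))   # residual near double-precision level

As long as the matrix is not too ill-conditioned for single precision, the
iteration converges to the double-precision answer, and most of the flops
(and energy) are spent in the cheaper format.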

------
daveguy
I always thought it would be cool to have a Top500 list that does not
restrict the algorithm, just the accuracy and precision of the FFT. It would
be interesting to see how algorithms would get us from 10 petaflops to "1
exaflop equivalent" -- the flops equivalent being the speedup from getting
the same answer in less time. As it is, they have very strict algorithm
requirements (which makes sense if you are standardizing on flops). Just
curious, but of course execution time on the top supercomputers isn't cheap.

~~~
privong
What about doing something like the inverse? A Top500 for algorithms on a
fixed hardware configuration. You have an input dataset and a known answer
(perhaps something with an analytic solution, for the sake of comparison).
That would let you compare algorithms more directly.

------
GFK_of_xmaspast
The idea of changing the method as you advance in the problem is not a new
one; here's a recent article in SIAM Review:
[http://epubs.siam.org/doi/abs/10.1137/130936725](http://epubs.siam.org/doi/abs/10.1137/130936725)

~~~
carterschonwald
Googling yields a non-paywalled copy. The paper's title is "Composing
Scalable Nonlinear Algebraic Solvers".

