
Mpmath – Python library for arbitrary-precision floating-point arithmetic - subnaught
http://mpmath.org/
======
adrianN
This looks like a pure Python implementation. I wonder what the performance
is like?

~~~
fdej
Author of mpmath here. It depends a lot on what you're doing.

A single floating-point arithmetic operation in mpmath at low precision
involves something like 100 "Python cycles" (bytecode ops), each of which
takes perhaps 100 machine cycles.

That makes it, roughly:

100 times slower than machine arithmetic in Python.

10000 times slower than machine arithmetic in C (or NumPy if you can vectorize
your code fully).

100 times slower than arbitrary precision floating-point arithmetic
implemented in C (~100 machine cycles).

These are obviously just order of magnitude estimates.
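
A rough way to see this for yourself (a minimal sketch; the exact ratio
depends on the machine, the Python version, and whether GMPY is installed):

    # Rough comparison of native float vs mpmath mpf arithmetic at low precision.
    # Note: the lambda call overhead is included in both timings, so the measured
    # ratio understates the true per-operation slowdown.
    import timeit
    from mpmath import mp, mpf

    mp.prec = 53                      # same precision as a hardware double

    a, b = 1.2345678, 8.7654321       # native doubles
    x, y = mpf(a), mpf(b)             # mpmath floats

    t_float = timeit.timeit(lambda: a * b, number=100000)
    t_mpf = timeit.timeit(lambda: x * y, number=100000)
    print("slowdown: %.0fx" % (t_mpf / t_float))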

However, low precision is the worst case, relatively speaking. mpmath uses
Python longs internally, and it can also use GMPY when available. At
sufficiently high precision, the time is dominated by the Python/GMP kernel
for multiplying integers, and performance is close to that of other bignum
implementations.
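
For illustration, here is a quick check that the per-operation interpreter
overhead stops mattering as the precision grows (digit counts picked
arbitrarily):

    # At high precision a single mpf multiplication is dominated by bignum
    # multiplication, not by Python interpreter overhead.
    import timeit
    from mpmath import mp

    for dps in (15, 1000, 100000):
        mp.dps = dps
        x, y = +mp.pi, +mp.e          # values rounded to the current precision
        t = timeit.timeit(lambda: x * y, number=100)
        print("dps=%6d  %.2e s per multiplication" % (dps, t / 100))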

Also, for computing transcendental functions, mpmath uses fixed-point
arithmetic internally, which reduces overhead a lot.
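
For example, evaluating a transcendental function at high precision stays
quite usable:

    from mpmath import mp

    mp.dps = 1000           # 1000 decimal digits
    print(mp.exp(1))        # e to 1000 digits
    print(mp.log(2))        # ln(2) to 1000 digits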

The biggest problem with mpmath is that it doesn't implement algorithms that
scale optimally for all operations, and a lot of the error analysis is
completely nonrigorous.

Since 2012, I have been developing a C library
(https://github.com/fredrik-johansson/arb/) that addresses many of the
shortcomings of mpmath. It is obviously much faster at low precision (the
factor 100 mentioned above), it generally uses much better algorithms, and it
tracks error bounds automatically using interval arithmetic.
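
Arb itself is a C library, but the idea of carrying an error bound along
with every value can already be tried out from Python with mpmath's interval
context (ordinary interval arithmetic rather than Arb's midpoint-radius
balls; just an illustration):

    # mpmath's interval context tracks rigorous enclosures automatically.
    from mpmath import iv

    iv.dps = 15
    y = iv.sqrt(2) * iv.sqrt(2)   # rounding widens the enclosure around 2
    print(y)                      # an interval guaranteed to contain the true value
    print(y.delta)                # width of the enclosure, i.e. the tracked error bound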

~~~
pklausler
Why not use the "double-double" representation, in which one extends a regular
double precision value with another double that estimates its error (and then
maybe a third that estimates _its_ error, and so on...)? Double-double
operations are fast in terms of floating-point operations, conversions to
other floating types are free, and the precision (107.5 bits asymptotically)
is nearly as good as 128-bit (112 bits).
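
For reference, the core building block of double-double arithmetic is an
error-free transformation such as Knuth's two-sum, which recovers the
rounding error of an addition exactly; a minimal sketch (production
double-double code also needs an error-free product, typically via FMA):

    # Knuth's two-sum: s = fl(a + b) plus the exact rounding error e,
    # so that a + b == s + e holds exactly. Double-double arithmetic
    # chains such transformations to keep roughly 2 x 53 significand bits.
    def two_sum(a, b):
        s = a + b
        bv = s - a
        e = (a - (s - bv)) + (b - bv)
        return s, e

    hi, lo = two_sum(1.0, 1e-20)
    print(hi, lo)   # 1.0 1e-20: the tiny term survives in the low word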

~~~
fdej
Sure, but:

1) As soon as you wrap it in a Python class, much of that speed advantage goes
away.

2) Double-double doesn't solve the problem with the limited exponent range of
doubles.

3) When the goal is to allow setting the precision to any number of bits you
like, having a separate implementation for X bits complicates things a lot,
especially when the X-bit arithmetic doesn't quite behave like Y-bit
arithmetic for X != Y (double-double behaves somewhat differently from 108-bit
floating-point).

IMO, double-double would be most interesting as a NumPy extension.
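
Regarding point 2: both components of a double-double value are ordinary
IEEE doubles, so overflow and underflow still kick in at the same magnitudes
as for plain floats. A quick illustration:

    # The high word of a double-double is still an IEEE double, so the
    # representable range is capped near 1e308 either way.
    print(1e200 * 1e200)                  # inf (overflow)
    print(1e-200 * 1e-200)                # 0.0 (underflow)

    from mpmath import mpf
    print(mpf('1e200') * mpf('1e200'))    # 1.0e+400 -- mpmath exponents are bignums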

~~~
pklausler
Fair enough, although I've always (well, since 1980) been skeptical that an
exponent range from 1E-308 to 1E+308 fails to suffice for any real problem. I
mean, come on, there's only something like 1E+80 atoms in the universe.

~~~
fdej
Very large or small numbers are useful in combinatorics and number theory. It
can also happen that they are needed for intermediate steps in an algorithm,
even when the inputs and outputs are moderate (the same reason that 15 digits
sometimes isn't enough even when you just want 3 digits of output). Granted,
you can generally work around exponent range limitations using tricks (scaling
values properly or introducing logarithms), but that can obviously be a
hassle.
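
One concrete example (numbers picked just for illustration): the probability
of exactly 1000 heads in 2000 fair coin flips is a perfectly moderate ~0.0178,
but the obvious intermediate quantities blow past the double range in both
directions:

    # 0.5**2000 (~8.7e-603) underflows a double, and C(2000, 1000) (~2e600)
    # overflows one, even though the final probability is ~0.0178.
    print(0.5 ** 2000)                               # 0.0 with doubles

    from mpmath import mp, mpf, binomial
    mp.dps = 15
    print(binomial(2000, 1000) * mpf(0.5) ** 2000)   # ~0.01784, no rearranging needed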

