

The Easy Way To Solve Equations In Python - barakstout
http://thelivingpearl.com/2013/01/15/the-easy-way-to-solve-equations-in-python/

======
dmlorenzetti
I hate to bash on this article, but the number of commenters calling it "nice"
makes me want to warn that the implementations are not really very good.

Except for the bisection method, all of these implementations take an argument
specifying the number of iterations to run. In most cases, the only way to
terminate in fewer iterations is by hitting an "exact" root, i.e., calculating
the residual as exactly zero. This is poor practice for a number of reasons.
First, in practice it's pretty rare for a method to find an exact zero.
Second, once a method has converged to the numerical precision of the machine,
making more iterations just wastes flops. So a much better approach is to
specify a solution tolerance (as shown with the bisection method). Even better
is to provide absolute and relative tolerances, and to choose those values
based on either the domain requirements, or on the machine characteristics.
Dennis & Schnabel's excellent "Numerical Methods for Unconstrained
Optimization and Nonlinear Equations" has a good discussion on choosing
convergence tolerances.
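To make the point concrete, here is a minimal sketch of tolerance-based termination using Newton's method (the function names, tolerance values, and structure are my own illustration, not the article's code):

```python
def newton(f, fprime, x0, rel_tol=1e-12, abs_tol=1e-12, max_iter=100):
    """Newton's method that stops on a step-size tolerance
    rather than running a fixed number of iterations."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x_new = x - step
        # Converged when the step is small relative to the iterate,
        # or small in absolute terms for roots near zero.
        if abs(step) <= max(rel_tol * abs(x_new), abs_tol):
            return x_new
        x = x_new
    raise RuntimeError("did not converge within max_iter iterations")

# Example: the real root of x**3 + x - 1 near x = 1
root = newton(lambda x: x**3 + x - 1, lambda x: 3 * x**2 + 1, 1.0)
```

In practice one would pick rel_tol and abs_tol from the domain requirements or from machine epsilon, as Dennis & Schnabel discuss; the values above are just placeholders.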

This dependence on iteration counts to terminate, by the way, is probably why
the author equates low iteration counts with greater accuracy. But in fact
these methods don't vary in their intrinsic accuracy, rather, they vary in
their order of convergence.

Another example of poor practice is in the bisection method implementation.
One generally should not bisect an interval using c = (a+b)/2, because the
nature of finite-precision arithmetic means there is no guarantee that c will
lie between a and b, even if the machine can represent numbers between a and
b. A better approach is to ensure a < b, then to set c = a + (b-a)/2. This
expression is much less subject to roundoff errors.
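A bisection sketch along those lines might look like this (again, my own illustrative code, not the article's):

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection using the roundoff-safer midpoint c = a + (b - a) / 2."""
    if a > b:
        a, b = b, a  # ensure a < b before forming the midpoint
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = a + (b - a) / 2  # safer than (a + b) / 2
        # Stop when the interval is below tolerance, or when it has
        # collapsed to the point that c no longer lies strictly inside.
        if (b - a) / 2 < tol or c <= a or c >= b:
            return c
        fc = f(c)
        if fa * fc <= 0:
            b = c
        else:
            a, fa = c, fc
    return a + (b - a) / 2

root = bisect(lambda x: x**3 + x - 1, 0.0, 1.0)
```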

~~~
barakstout
All good, valid points. The article gives a high-level overview but is light
on detail, and some of these points are mentioned in it: the author notes a
few times that the for-loops should be replaced with while-loops based on a
convergence tolerance. As for the bisection interval, I am not convinced the
basic c = (a+b)/2 is insufficient. Can you provide an example where it fails?

~~~
simcop2387
I suspect it fails when you're near an overflow or underflow situation. If
a + b > MAX_INT or MAX_FLOAT, then c could end up lower than a.

~~~
barakstout
Wouldn't you consider that a very extreme situation? Also, I am not aware of
a MAX_INT in Python; as far as I know, integers are limited only by your
memory. If you are trying to work with numbers that big, you should really
use something a little more powerful than Python and bisection.

------
nnnnnnnninnnnnn
See <http://sympy.org/en/index.html>. I've been using this for a while.

~~~
jre
I've been using sympy to automate some derivative computation and I loved it.
I found the evalf[1] function to be really helpful as it helps bridge the gap
between the symbolic and numeric world.

[1] <http://docs.sympy.org/dev/modules/evalf.html>

~~~
barakstout
Good point. I think you could use the standard Python eval() instead of
defining f, something like:

    f = 'x**3 + x - 1'
    x = 4
    eval(f)

It should work fine.
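If you go the eval() route, one refinement (my suggestion, not from the article) is to compile the expression once and evaluate it with an explicit, restricted namespace, so you get a plain function you can hand to any solver:

```python
def make_f(expr):
    """Turn an expression string in 'x' into a callable f(x)."""
    code = compile(expr, '<expr>', 'eval')
    # Empty __builtins__ keeps the evaluation namespace restricted.
    return lambda x: eval(code, {'__builtins__': {}}, {'x': x})

f = make_f('x**3 + x - 1')
print(f(4))  # 67
```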

------
inetsee
I was fairly impressed by this article until I noticed that the author was
using Python 2.6. Isn't Python 2.7 the version usually used by those who
aren't ready to make the leap to Python 3?

Then I noticed that there was no mention at all of the various ways of using
the R language with Python (RPy, etc.).

The easy way to solve equations in Python (or any other language for that
matter) would be to take full advantage of the work that other bright people
have done.

~~~
anonymouz
I think the article would better be described as "Basic methods for
numerically solving equations". It gives a nice introduction to a number of
those methods, and is really not Python specific as the code can easily be
ported to any other language. I would view it as a way to learn the methods,
rather than how to actually solve an equation in a program.

~~~
barakstout
If it were more focused on the math, I would agree. However, the focus is on
the solution rather than the technique. It is a nice refresher for those who
have seen the material before.

------
NamTaf
I wrote this a while ago during uni to help cement in my head the various
numerical analysis methods covered in my classes. It's not formal or well-
written, and it doesn't cover all of the detail perfectly, because it was
largely for my own use. However, some of you may find it interesting, I guess:

<http://www.overclockers.com.au/wiki/Numerical_Analysis>

I offer no Good Science or Good Writing warranty on it ;)

