Think of it less as a Mathematica replacement (like "Linux on the Desktop") and more as a crucial library enabling a lot of fun new creative things (like "embedded linux running on your toaster").
For example, in some of my computational chemistry work we use it to allow users to specify certain functionals, which we can then manipulate symbolically, do expression reduction and elimination on, and prove certain properties about. It's great!
Symbolic math is hard; they have my sympathies. I don't think I could do better. But as long as bugs like these exist, it's going to be hard to convince people to switch away from better tools like Mathematica.
I do get that it is a hard problem, but I would then recommend they not have an option for "real" if they can't get it right. Users will expect it to work (unless the docs point out it is unreliable).
You should be suspicious of using floats and expecting the solver to detect whether a solution is real. Consider the polynomial x^2 + C for C ≈ 0. Are its roots real or complex? The problem is that even a very small change in C, perhaps caused by rounding error, can easily flip a root from real to complex.
When solving polynomials, either use complex numbers throughout or insist on exact solutions and don't use floats.
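A quick sketch of that instability, using numpy's np.roots (not SymPy) on x^2 + C with C just above and just below zero:

```python
import numpy as np

# Roots of x^2 + C for tiny C on either side of zero: the sign of C,
# which a rounding error can easily flip, decides real vs. complex roots.
for C in (1e-12, -1e-12):
    print(C, np.roots([1, 0, C]))

# x^2 + 1e-12 has a conjugate pair; x^2 - 1e-12 has two real roots.
complex_roots = np.roots([1, 0, 1e-12])
real_roots = np.roots([1, 0, -1e-12])
```

A perturbation of 2e-12 in one coefficient moves the roots from the real axis to the imaginary axis, so no float-based "is this root real?" test can be robust here.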
C ≈ 0?
The imaginary parts in the first bug are nowhere close to zero.
> I would then recommend they not have an option for "real" if they can't get it right. Users will expect it to work (unless the docs point out it is unreliable).
It can't work reliably if the coefficients of the polynomial consist of floats. At least not in general. This point is a more fundamental one than your example.
Also, regarding your particular example: When solving cubics using the cubic formula, it is often the case that you will end up computing with complex numbers even if the result ends up real. If you introduce floats into this, it's impossible to guarantee that the imaginary parts will cancel out completely. Instead, they may cancel up to 10^-21 or something like that, which is precisely what happens in your example.
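To make the cancellation point concrete, here's a sketch of Cardano's formula in floats, applied to the depressed cubic t^3 - 3t = 0, whose three roots (0 and ±sqrt(3)) are all real. The quantity under the square root is negative, so the intermediates u and v are genuinely complex, and their imaginary parts only cancel up to rounding:

```python
import cmath
import math

# Cardano's formula for t^3 + p*t + q = 0, here t^3 - 3t = 0.
p, q = -3.0, 0.0
d = cmath.sqrt((q / 2) ** 2 + (p / 3) ** 3)  # sqrt of a negative number
u = (-q / 2 + d) ** (1 / 3)                  # genuinely complex intermediate
v = (-q / 2 - d) ** (1 / 3)                  # its "conjugate" partner
root = u + v                                 # should be the real root sqrt(3)
print(u, v, root)
```

In floating point there is no guarantee that root.imag comes out exactly 0.0, even though the true root is real; this is the casus irreducibilis, where real roots force a detour through complex arithmetic.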
There's no reason that the algorithm must fail on univariate real polynomials if it can't work equally well on arbitrary functions with arbitrary domains. It's trivial for a symbolic engine to recognize a univariate polynomial. And practically speaking it's the job of a computer algebra system to recognize different well-known classes of functions and treat them as well as it can. What I suspect likely happened here is they've been just so busy dealing with the more general problems (which are far more difficult) that they haven't gotten around to implementing many better methods for the special cases (like polynomials). Which I sympathize with, as I'd have virtually no clue how to solve some of the more general problems to begin with.
I can't tell if I'm just exceptionally good at making bugs explode in my face or something, but these are so simple that it boggles my mind when I'm the first person to report them...
Sometimes this is implementation specific, and is numerical instability. Sometimes this is endemic to the algorithm regardless of implementation, like Runge's phenomenon. Often there is no fix, or the fix would be inordinately complicated or only generalize poorly, etc.
In this situation scipy should probably just pop a warning.
They "just" produce "better" warning messages? You say that as if scipy were producing any warning messages at all. There's an absolutely colossal difference between failing with a poor error description and producing a completely wrong answer with 100% confidence.
> but you're demanding a solution that doesn't really exist
Doesn't exist?! Have you even tried the other algorithms on there before writing this? Broyden1, anderson, etc. handle this just fine. Even the default hybrid algorithm itself would handle this just fine if they even cared to run that very algorithm a couple more times to get convergence. Is there an algorithm that solves every equation in the world? No. Does it need to? No. Does it need to be able to handle the most basic cases? Yeah.
Surprised they haven't added the warning and closed the bug.
(a) Univariate quadratics are extremely well understood and extremely important. For example, the convergence of optimization algorithms (this is root-finding, and there are some differences, but they're not unrelated, and this is beside my point) is often proven on quadratics first, and then the algorithms are applied to other functions.
(b) (x - 1)^2 - 1 is just about the simplest quadratic you can possibly imagine that isn't outright lacking in other terms (like x^2), and quadratics are pretty much the simplest nonlinear functions.
(c) No matter how good or bad your algorithm is, it is absolutely trivial to do a few quick tests afterward to sanity-check the result, and to return some kind of error if it looks wrong.
This isn't some kind of quibbling over a solver that has an error of 0.0001 or something. We're feeding in the most basic quadratic you can imagine, and the solver is telling you with confidence that the solution is 1.01 when in fact it's supposed to be 2. It's just inexcusable. At the very least (and even this would honestly still be woefully lacking), it should be able to do the dumbest thing possible, which is to plug that back in, and maybe plug in a couple close numbers (maybe +/- some epsilon) and see if results change sign or land anywhere near zero. In reality there are all kinds of tests they could and should be doing to check things like concavity and switch to better algorithms, but that's kind of moot when they're not doing the most basic check.
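For illustration, here is the kind of dumb post-hoc check being described, as a hypothetical helper (not anything scipy currently provides):

```python
def looks_like_root(f, x, eps=1e-6, tol=1e-8):
    """Cheap sanity check for a candidate root: accept if f(x) is
    essentially zero, or if f changes sign just around x."""
    if abs(f(x)) < tol:
        return True
    return f(x - eps) * f(x + eps) < 0

f = lambda x: (x - 1) ** 2 - 1   # roots at 0 and 2
print(looks_like_root(f, 2.0))   # a genuine root
print(looks_like_root(f, 1.01))  # the bogus answer from the bug report
```

Two extra function evaluations would be enough to flag the 1.01 "solution" as nowhere near a root and at least raise a warning.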
Edit: Actually I could probably say something similar about one of the SymPy bugs too (they really shouldn't have trouble telling if 0.66 + 0.56I is a real number), but one key difference is at least their problem is in filtering out correct solutions, not in returning completely wrong solutions with full confidence.
Actually, I can see the argument both ways. To each his own. You're not fundamentally wrong, but you're also not fundamentally right either :-)
Personally, I think the returned result should display the value of the function evaluated at the proposed root, so the programmer can check it against his own tolerance.
And for the record I could propose better solutions but clearly you don't want them because that's too much "magic".
In fact, one bit you might find fun: try plugging in the proposed solution as a new guess and see how well the algorithm had even converged. You're literally advocating for an algorithm that produces wrong solutions it doesn't even claim to converge on. Is it even possible to be more wrong than this on such a basic problem?!
Lastly: I've found far milder bugs in Mathematica that I didn't expect them to fix, and they actually fixed them. So I think, moving forward, this is probably going to be my exhibit A for why expecting open source software to achieve the quality of commercial solutions might be fundamentally just expecting too much. When you don't have customer money to tell you your wrong solutions are wrong, people will jump to the defense of the most insane behavior, telling you you're "not fundamentally right" to expect programs to have even the most trivial sanity checks to prevent wildly wrong outputs for the most basic problems.
Of course, I'm talking about the worst case. Your example is easier.
Great, because nobody was asking for that either.
> Of course, I'm talking about the worst case. Your example is easier.
Which has been my entire point this whole time, which I already explained to you but which you conveniently prefer to totally ignore. The case I gave is not merely "easier"... it's utterly trivial. I literally even explained how they could solve it with the current algorithm too: by running multiple iterations of that exact algorithm to at least try to get some kind of convergence. Did I ever demand or expect it to solve arbitrary transcendentals? No, nobody was demanding it to magically solve everything. But producing flatly wrong outputs for even the simplest quadratics without any attempt to improve it, sanity check it, or issue a warning is just plain inexcusable and embarrassing.
 - https://github.com/JuliaPy/SymPy.jl
 - https://github.com/symengine/SymEngine.jl
 - https://github.com/jlapeyre/Symata.jl
It's more like SymEngine in terms of completeness right now, though there's a good amount of simplification and equation solving built in. It's still growing; it's not at SymPy's level yet, but it's moving fast.
Compared to SymPy, I feel that it is less of a "how do I integrate this function" package and more about "how can I build this DSL" framework.
In SymPy you do M.diagonalize(), which makes no sense. The matrix M does not have a property diagonalize. Rather, you should apply a chosen diagonalization algorithm, like Diagonalize(M), to produce a new matrix.
Mathematica wins, and will continue to win, because the language is functional and conforms to how Mathematicians think, rather than how (EDIT: certain) programmers like to code.
My favorite example is: ','.join(['a', 'b'])
> My favorite example is: ','.join(['a', 'b'])
I've always found the discrepancy between this example (instance method call), list(('a', 'b')) (technically a method call, but resembles a functional call), and str.lower('AB') (class method call) to be quite maddening.
* partial application is actually useful: I've used (','.join) before, but (lambda sep: sep.join(['a', 'b'])) seems relatively useless. That's why Haskell also uses this order, with (intercalate sep listOfStrings).
* it accepts any iterable, rather than just lists.
I accept that you’ve identified some advantages of the Python version; it’s always possible to find cases where object.method() notation gets you something.
Well, 'hello'.join(', ') does not produce the same output as ', '.join('hello'), so the order of the arguments does matter.
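To make the asymmetry concrete (and the any-iterable point from above):

```python
# join is a method on the *separator*, and it accepts any iterable:
print(', '.join('hello'))   # joins the characters of 'hello'
print('hello'.join(', '))   # joins ',' and ' ' using 'hello' as separator
print(','.join(str(i) for i in range(3)))  # works on a generator, not just a list
```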
Structure and Interpretation of Classical Mechanics walks through ambiguity in physics and math. The first interesting note I found was footnote 2 in the preface, citing the ambiguity in the statement of the chain rule.
I am ignorant about some of the insight coming from recent decades of programming and would love to hear it.
as opposed to recent millennia of mathematical experience?
You can also make a diagonalize function that just calls the method if you like that style better, half of python's generic functions (e.g. len) just call methods anyways.
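For example (Playlist is a made-up class; the diagonalize wrapper assumes SymPy's Matrix.diagonalize() method discussed above):

```python
class Playlist:
    """Toy class showing how len() just delegates to a method."""
    def __init__(self, songs):
        self.songs = songs
    def __len__(self):          # len(p) calls this under the hood
        return len(self.songs)

print(len(Playlist(['a', 'b', 'c'])))

# The same one-liner gives a functional spelling for any method:
def diagonalize(M):
    return M.diagonalize()  # delegate to the method (SymPy's returns a (P, D) pair)
```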
Or maybe Python "will win" because it conforms to how programmers think, rather than forcing them to think like mathematicians (also, why capitalize "mathematicians" in your sentence?), letting them get their shit done.
More importantly, "win" what exactly? It makes no sense to talk about "winning" if you don't even properly define a victory condition and context first.
Python is an important tool in general; it allows you to automate many tasks that would otherwise have to be done manually. I wish more non-programmers knew how to use it.
Sympy works for people who know programming (not just programmers, but researchers in general) who need to use it occasionally.
Do people see a space where SymPy can realistically flourish?
Mathematica has no serious competition as a general-purpose symbolic system. I guess Maple is the closest runner-up, but it's more narrowly focused on computer algebra. Most people only need a fraction of Mathematica to get their work done, though.
Last time I used sympy I had a complicated function on many variables and wanted to plot certain specific 2D contours. I was distant from any computer with numerical/programming tools and needed to add them to a PPT deck last minute.
One of the coolest things about SymPy is the Live Shell, which lets you try basic calculations (without plotting) in the browser: https://live.sympy.org
There is even an option to send the entire session encoded as URL querystring which makes it tweetable, emailable, etc.
e.g. https://live.sympy.org/?evaluate=factor(x**2%2B5*x%2B6)%0A%2... (to generate the session-as-querystring for your current live.sympy.org session, use the thumbtack button below the prompt)
I use this very often to send calculations and solutions—not only do you get the answer, but you see all the calculations.
The puzzle involves a weird "spiral" RAM:
> You come across an experimental new kind of memory stored on an infinite two-dimensional grid.
> Each square on the grid is allocated in a spiral pattern starting at a location marked 1 and then counting up while spiraling outward.
The problem is to compute "the Manhattan Distance between the location of the data and square 1" for any given square.
While working on this I came up with the following equation:
from sympy import floor, lambdify, solve, symbols
k = symbols('k')
E = 2 + 8 * k * (k + 1) / 2
I needed a function to solve for k given some n... SymPy can do that. Take a new symbol `n`, subtract it from the expression `E`, giving 0 = 4k(k + 1) + 2 - n, and solve for `k`. There are two solutions because the equation is quadratic, so it has two roots.
n = symbols('n')
g, f = solve(E - n, k)
(sqrt(n - 1) / 2 - 0.5) + 1
F = lambdify(n, floor(f) + 1)
for n in (9, 10, 25, 26, 49, 50):
    print(n, F(n))
a few weeks ago https://news.ycombinator.com/item?id=25254648
Lots in 2014:
> The Qtconsole is a very lightweight application that largely feels like a terminal, but provides a number of enhancements only possible in a GUI, such as inline figures, proper multiline editing with syntax highlighting, graphical calltips, and more.