Doing Symbolic Math with SymPy (lwn.net)
131 points by signa11 | 76 comments

I want to encourage people to think of sympy not just as a competitor to Mathematica but additionally as an incredibly valuable library that can be used _inside_ of other projects. Sometimes, you just want to compute an antiderivative, or you want user-supplied functions that you can manipulate easily, or you want to do some actual algebra.
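For instance, a minimal sketch of that library-style use (names are illustrative):

    from sympy import symbols, integrate, sympify

    x = symbols('x')
    print(integrate(x**2 * (1 - x), x))  # an antiderivative: -x**4/4 + x**3/3
    expr = sympify('sin(x)/x')           # parse a user-supplied function
    print(expr.diff(x))                  # manipulate it: cos(x)/x - sin(x)/x**2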

Think of it less as a Mathematica replacement (like "Linux on the Desktop") and more as a crucial library enabling a lot of fun new creative things (like "embedded linux running on your toaster").

For example, in some of my computational chemistry work we use it to allow users to specify certain functionals, which we can then manipulate symbolically, do expression reduction and elimination on, and prove certain properties about. It's great!


This may be a stretch, but can you point to any examples or articles about using SymPy outside of a strictly mathematical context?

I would bet it can be useful in physics engines

I'd like to give an alternative perspective on SymPy. Whilst I'm not familiar with Mathematica, and therefore cannot draw parallels between the two, I've used SymPy for a number of years and found it to be really useful. I was initially introduced to it as part of a first-year programming module in a Physics degree, and it was more than sufficient for that use-case; it absolutely wasn't worth paying for something like Mathematica there.

When combined with a graphical REPL (some may use Spyder; I personally use jupyter-qtconsole) or a Jupyter Notebook, it becomes really quite useful, as it incorporates LaTeX rendering to make its outputs more readable and a little less daunting.

Whilst the syntax may not be optimal for the mathematical way of working, for many problems like simple rearrangements, automatic simplification, checking equivalences, etc. it is almost always the first tool I reach for. The syntax is just cosmetic, and I use Python enough that it doesn't bother me. It does its job well, and it's helped me solve and verify solutions for a number of mathematical problems during my degree. Admittedly, I do run into the occasional problem here and there where there are obviously better tools available, but that's the same with any gratis vs. paid solution. In most cases, SymPy does a sufficiently good job.
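For example, a small sketch of the kinds of checks described above:

    from sympy import symbols, simplify, sin, cos, solveset, Eq

    x = symbols('x')
    print(simplify(sin(x)**2 + cos(x)**2 - 1))  # 0, so the identity holds
    print(solveset(Eq(2*x + 1, 7), x))          # {3}, a simple rearrangement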

The trouble with SymPy is it's, well, buggy. I tried it a few years ago, and as soon as I got serious, I quite quickly ran into problems that I reported, some of which I now see they apparently still haven't gotten around to addressing. [1] [2]

Symbolic math is hard; they have my sympathies. I don't think I could do better. But as long as bugs like these exist, it's going to be hard to convince people to switch away from better tools like Mathematica.

[1] https://github.com/sympy/sympy/issues/12561

[2] https://github.com/sympy/sympy/issues/12562


Yikes those are bad. As a non-user interested in it, I must confess that seeing those issues (especially the 2nd one) makes me not want to use it!

I do get that it is a hard problem, but I would then recommend they not have an option for "real" if they can't get it right. Users will expect it to work (unless the docs point out it is unreliable).


> I would then recommend they not have an option for "real" if they can't get it right. Users will expect it to work (unless the docs point out it is unreliable).

You should be suspicious of using floats and expecting the solver to detect if a solution is real. Consider the polynomial $x^2 + C$ for $C \approx 0$. Are all of its roots real or complex? The problem is that even a very small change in C, maybe caused by rounding errors, can easily change a root from real to not real.

When solving polynomials, either use complex numbers throughout or insist on exact solutions and don't use floats.
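To make this concrete, a small numerical sketch (using numpy.roots purely for illustration):

    import numpy as np

    # roots of x^2 + C for C barely below and barely above zero
    print(np.roots([1, 0, -1e-16]))  # ~[ 1e-08, -1e-08]: both real
    print(np.roots([1, 0,  1e-16]))  # ~[ 1e-08j, -1e-08j]: both imaginary
    # a rounding-error-sized change in C flips the roots off the real line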


> You should be suspicious of using floats and expecting the solver to detect if a solution is real. Consider the polynomial $x^2 + C$ for $C \approx 0$. Are all of its roots real or complex? The problem is that even a very small change in C, maybe caused by rounding errors, can easily change a root from real to not real.

C \approx 0?

The imaginary parts in the first bug are nowhere close to zero.


I was actually responding to the following claim:

> I would then recommend they not have an option for "real" if they can't get it right. Users will expect it to work (unless the docs point out it is unreliable).

It can't work reliably if the coefficients of the polynomial consist of floats. At least not in general. This point is a more fundamental one than your example.

Also, regarding your particular example: When solving cubics using the cubic formula, it is often the case that you will end up computing with complex numbers even if the result ends up real. If you introduce floats into this, it's impossible to guarantee that the imaginary parts will cancel out completely. Instead, they may cancel up to 10^-21 or something like that, which is precisely what happens in your example.
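A sketch of this effect, applying Cardano's formula with floats to a cubic whose roots are all real:

    import cmath

    # Cardano for t^3 + p*t + q = 0 with p = -3, q = 1: three real roots
    p, q = -3.0, 1.0
    d = cmath.sqrt((q / 2)**2 + (p / 3)**3)  # negative discriminant -> complex
    u = (-q / 2 + d)**(1 / 3)                # a complex cube root
    root = u + (-p / 3) / u                  # mathematically real, but the
    print(root)                              # imaginary part typically only
                                             # cancels to ~1e-17, not exactly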


You're missing my point entirely. They don't need to solve this by keeping the variable real at every step or anything like that. What they need to do is to ensure the final solutions are real. Which means they need to solve it in the complex domain entirely and then filter out non-real solutions at the end, where they have the final magnitudes of the real and imaginary components. The only place that would run into trouble is where the imaginary components are small but nonzero, where you'd just expect it to use some tolerance threshold as the cutoff, but that obviously isn't the case in the first example (hence my comment).

There's no reason that the algorithm must fail on univariate real polynomials if it can't work equally well on arbitrary functions with arbitrary domains. It's trivial for a symbolic engine to recognize a univariate polynomial. And practically speaking it's the job of a computer algebra system to recognize different well-known classes of functions and treat them as well as it can. What I suspect likely happened here is they've been just so busy dealing with the more general problems (which are far more difficult) that they haven't gotten around to implementing many better methods for the special cases (like polynomials). Which I sympathize with, as I'd have virtually no clue how to solve some of the more general problems to begin with.
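In other words, something like this post-filter (a sketch; the tolerance is an illustrative choice):

    # candidate roots computed in the complex domain
    candidates = [2.0 + 1e-17j, 0.5 + 0.8j]
    tol = 1e-9  # illustrative tolerance
    real_roots = [z.real for z in candidates if abs(z.imag) < tol]
    print(real_roots)  # [2.0]: tiny imaginary residue treated as zero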


I don’t think there is a general-purpose symbolic math program that doesn’t have some bugs. Mathematica was getting some definite integrals wrong for some years. Anyone who uses these knows to check the results.

Yeah. For what it's worth (not sure if it makes you feel better or worse...) I've found similarly bad bugs in SciPy too.

Curious what those were.

Here's the one I can find (might not have reported/written others so I don't recall them): https://github.com/scipy/scipy/issues/7332

I can't tell if I'm just exceptionally good at making bugs explode in my face or something, but these are so simple that it boggles my mind when I'm the first person to report them...


That's not really a bug; it's just behavior explained by numerical analysis. A very small change in the initial guess can be the difference between success and a blowup for numerical algorithms, particularly root-finding algorithms.

Sometimes this is implementation-specific numerical instability. Sometimes it is endemic to the algorithm regardless of implementation, like Runge's phenomenon. Often there is no fix, or the fix would be inordinately complicated or would generalize poorly.

In this situation scipy should probably just pop a warning.


I know how numerical stuff works, but no, this is pretty inexcusable. See my other comment: https://news.ycombinator.com/item?id=25689916

You say you know how numerical stuff works, but you're demanding a solution that doesn't really exist. Numerical rootfinding algorithms are prone to this. The same thing happens in e.g. Mathematica, they just have better warning messages and documentation.

> The same thing happens in e.g. Mathematica, they just have better warning messages

They "just" produce "better" warning messages? You say that as if scipy is producing any warning messages? There's an absolutely colossal difference between failing with a poor error description and producing a completely wrong answer with 100% confidence.

> but you're demanding a solution that doesn't really exist

Doesn't exist?! Have you even tried the other algorithms on there before writing this? Broyden1, anderson, etc. handle this just fine. Even the default hybrid algorithm itself would handle this just fine if they even cared to run that very algorithm a couple more times to get convergence. Is there an algorithm that solves every equation in the world? No. Does it need to? No. Does it need to be able to handle the most basic cases? Yeah.


That one seems a bit more reasonable, though. When doing numerical computation (as opposed to symbolic), you should never assume the algorithm "just works". You need to know the quirks of the underlying method, etc. And I believe most root finding numerical algorithms will be sensitive to the initial guess.

Surprised they haven't added the warning and closed the bug.


Not at all. I've worked on numerical stuff myself; unlike the symbolic bugs, I have little sympathy for these particular numerical errors. It would be far more reasonable if it was convoluted composition of functions, or some kind of bizarre numerical edge case. However:

(a) Univariate quadratics are extremely well understood and extremely important. For example, the convergence of optimization algorithms (this is root-finding, and there are some differences, but the two are not unrelated, and this is beside my point) is often proven on quadratics first, before the algorithms are applied to other functions.

(b) (x - 1)^2 - 1 is just about the simplest quadratic you can possibly imagine that isn't outright lacking in other terms (like x^2), and quadratics are pretty much the simplest nonlinear functions.

(c) No matter how good or bad your algorithm is, it is absolutely trivial to do a few quick tests afterward to sanity-check the result, and to return some kind of error if it looks wrong.

This isn't some kind of quibbling over a solver that has an error of 0.0001 or something. We're feeding in the most basic quadratic you can imagine, and the solver is telling you with confidence that the solution is 1.01 when in fact it's supposed to be 2. It's just inexcusable. At the very least (and even this would honestly still be woefully lacking), it should be able to do the dumbest thing possible, which is to plug that back in, and maybe plug in a couple close numbers (maybe +/- some epsilon) and see if results change sign or land anywhere near zero. In reality there are all kinds of tests they could and should be doing to check things like concavity and switch to better algorithms, but that's kind of moot when they're not doing the most basic check.
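A sketch of the kind of sanity check being argued for here, on the quadratic from the bug report (the retry offset is an arbitrary illustrative choice, not scipy's behavior):

    from scipy.optimize import root

    def f(x):
        return (x - 1.0)**2 - 1.0  # roots at x = 0 and x = 2

    sol = root(f, 1.01)            # initial guess near the flat spot at x = 1
    if not sol.success or abs(f(sol.x[0])) > 1e-8:
        # don't trust a large residual: retry from a perturbed guess
        sol = root(f, sol.x[0] + 0.5)
    print(sol.x, f(sol.x[0]))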

Edit: Actually I could probably say something similar about one of the SymPy bugs too (they really shouldn't have trouble telling if 0.66 + 0.56I is a real number), but one key difference is at least their problem is in filtering out correct solutions, not in returning completely wrong solutions with full confidence.


It's been a while since I did root finding in SciPy, but in principle I do not agree, and it doesn't seem like you're proposing an actual solution. You can make the case that that root finding algorithm shouldn't be the default (if it is). You can argue for a warning, as mentioned in the bug, but what else? I personally prefer less magic in the algorithms (e.g. detect if it is a quadratic and pick a suitable algorithm), and more information in the docs about the method being used so the user is aware.

Actually, I can see the argument both ways. To each his own. You're not fundamentally wrong, but you're also not fundamentally right either :-)

I personally think the returned result should include the value of the function evaluated at the root, so the programmer can check it against their own tolerance.


As I see it, "less magic in the algorithms" makes sense when the user specifies an algorithm. When they haven't, it literally means: "I really don't care how you solve this, just give me the best solution you can however you want, and stop making excuses."

And for the record, I could propose better solutions, but clearly you don't want them because that's too much "magic". In fact, one bit you might find fun: try plugging in the proposed solution as a new guess and see how well the algorithm had even converged. You're literally advocating for an algorithm that produces wrong solutions it doesn't even claim to converge on. Is it even possible to be more wrong than this on such a basic problem?!

Lastly: I've found far milder bugs in Mathematica that I didn't expect them to fix, and they actually fixed them. So I think, moving forward, this is probably going to be my exhibit A for why expecting open source software to achieve the quality of commercial solutions might be fundamentally just expecting too much. When you don't have customer money to tell you your wrong solutions are wrong, people will jump to the defense of the most insane behavior, telling you you're "not fundamentally right" to expect programs to have even the most trivial sanity checks to prevent wildly wrong outputs for the most basic problems.


The root-finding problem is not solvable in general. In exact real arithmetic, there cannot exist an algorithm that will find an $x$ such that $f(x) = 0$ even if such an $x$ exists and $f$ is computable. At least not unless you make some further assumptions about $f$ and $x$.

Of course, I'm talking about the worst case. Your example is easier.


> The root-finding problem is not solvable in general.

Great, because nobody was asking for that either.

> Of course, I'm talking about the worst case. Your example is easier.

Which has been my entire point this whole time, which I already explained to you but which you conveniently prefer to totally ignore. The case I gave is not merely "easier"... it's utterly trivial. I literally even explained how they could solve it with the current algorithm too: by running multiple iterations of that exact algorithm to at least try to get some kind of convergence. Did I ever demand or expect it to solve arbitrary transcendentals? No, nobody was demanding it to magically solve everything. But producing flatly wrong outputs for even the simplest quadratics without any attempt to improve it, sanity check it, or issue a warning is just plain inexcusable and embarrassing.


Have you tried out SageMath? It seemed to work pretty well when I tried it out, but I never got serious really. My needs for symbolic math programming are pretty limited.

I haven't, though I did download it. I was trying to work on an algorithm rather than solve specific problem instances, so I ended up just working around it by using different problem instances that didn't result in these errors.

Worth noting that Julia's SymPy binding [1] is pretty nice to work with too. If anyone's looking for a big Julia project, I think a symbolic math package written fully in Julia would be a really exciting development. As far as I know, there isn't one yet. The better-known symbolic math packages for Julia still use bindings to C++ (SymEngine.jl [2]) or Python (SymPy.jl, Symata.jl [3]).

[1] - https://github.com/JuliaPy/SymPy.jl

[2] - https://github.com/symengine/SymEngine.jl

[3] - https://github.com/jlapeyre/Symata.jl


ModelingToolkit.jl is a symbolic math package written fully in Julia (with a bunch of extra symbolic-numerics features)

https://github.com/SciML/ModelingToolkit.jl

https://mtk.sciml.ai/dev/tutorials/symbolic_functions/

It's more like SymEngine in terms of completeness right now, though there's a good amount of simplification and equation solving built in. It's still growing, it's not at SymPy yet, but it's moving fast.


What is great about ModelingToolkit.jl is how it's used in practical ways by other packages, e.g. NeuralPDE.jl.

Compared to SymPy, I feel that it is less of a "how do I integrate this function" package and more of a "how can I build this DSL" framework.

https://github.com/SciML/NeuralPDE.jl


Andes, a Python package for power system simulation, uses SymPy now. In previous versions, computing the derivatives for dynamic models was done using complex equations written out manually. SymPy greatly simplifies the process of adding custom device models, which power companies often wish to do.

https://andes.readthedocs.io/
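Presumably something along these lines (a hypothetical sketch, not Andes's actual API):

    from sympy import symbols, lambdify

    # a toy device model: a first-order lag, dx/dt = f(x, v) = (v - x)/T
    x, v, T = symbols('x v T')
    f = (v - x) / T
    jac = [f.diff(s) for s in (x, v)]  # derivatives, no hand-written algebra
    f_num = lambdify((x, v, T), f)     # fast numeric function for the solver
    print(jac, f_num(0.0, 1.0, 0.1))   # [-1/T, 1/T] 10.0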


SymPy's problem is that it is based on an object-oriented language. This results in SymPy (and friends) demanding that users abandon hundreds of years of math notation, to conform to the whims of a programming language.

In SymPy you do M.diagonalize(), which makes no sense. The matrix M does not have a property diagonalize. Rather, you should apply a chosen diagonalization algorithm, like Diagonalize(M), to produce a new matrix.

Mathematica wins, and will continue to win, because the language is functional and conforms to how Mathematicians think, rather than how (EDIT: certain) programmers like to code.


This is a problem I have with Python in general: even if you prefer to use it in a functional style, most libraries are written by real Python programmers, and using them will force you to grapple with the object-oriented inversion of common sense (https://lee-phillips.org/pythonhate/).

My favorite example is: ','.join(['a', 'b'])


Yeah, the Python language favors an imperative style due to the absence of features necessary for writing more functionally, like lambdas that can do more than return an expression, tail call optimization, and lexical scope. This last one, additionally, tends to make people use classes for essentially no other reason than that closures don't work natively.

> My favorite example is: ','.join(['a', 'b'])

I've always found the discrepancy between this example (instance method call), list(('a', 'b')) (technically a method call, but it resembles a function call), and str.lower('AB') (class method call) to be quite maddening.
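Side by side (illustrative):

    ','.join(['a', 'b'])  # instance method on the separator -> 'a,b'
    list(('a', 'b'))      # type called like a plain function -> ['a', 'b']
    str.lower('AB')       # method called through the class   -> 'ab'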


Closures do work natively in Python though. Lexical scope does not prevent that. Most languages, including functional languages, use lexical scope these days.

What's wrong with (','.join(['a', 'b']))? If it's the order of arguments, then I want to point out that the order has two big advantages:

* partial application is actually useful: I've actually used (','.join) before, but (lambda sep: sep.join(['a', 'b'])) seems relatively useless. That's why Haskell also uses this order with (intercalate sep listOfStrings). (See the sketch after this list.)

* it accepts any iterable, rather than just lists.
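The sketch promised above:

    joiner = ','.join                        # the bound method is a function
    print(joiner(['a', 'b']))                # 'a,b'
    print(joiner(str(i) for i in range(3)))  # any iterable works: '0,1,2'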


What strikes me as weird about this is that our intent is to do something to the array of strings, so we are looking for a function that takes this array and transforms it into something else. But we are obligated to invert our intuition here, and write this as if we are doing something to the punctuation that we want to insert between the array elements. It’s backwards, and causes us (me, anyway) to keep forgetting how to use this method. I must have gotten this wrong a dozen times. The sensible way would be something like join(array, separator). That is straightforward, and would be hard to get wrong. It could even be written in a way where the order of arguments didn’t matter, so you wouldn’t even have to remember that. Why is the separator privileged as the implicit first argument of `join`?

I accept that you’ve identified some advantages of the Python version; it’s always possible to find cases where object.method() notation gets you something.


> It could even be written in a way where the order of arguments didn’t matter

Well, 'hello'.join(', ') does not produce the same output as ', '.join('hello'), so the order of the arguments does matter.


OK, but I was thinking along the lines of how easy it is in (for example) Julia to make methods that dispatch on types, so that your function could examine the arguments, and if one of them happens to be a single character, and the other a string or array, do the sensible thing, regardless of the order in which you typed in the arguments.

Perhaps they want named arguments so that join(sep=', ', strings='hello') and join(strings='hello', sep=', ') are equivalent.

The OOP interface is wrong for this functionality. It should be a free function. There is no reason it should be restricted to strings: "join" makes sense on any sequence-like objects. We should be able to call join(Iter1, Iter2[Iter3]), where the Iter* are all arbitrary iterable data types. A sketch of such a function is below.
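A sketch (the function name and semantics are hypothetical):

    def join(sep, seqs):
        # hypothetical free function: interleave the elements of `sep`
        # between the member sequences of `seqs`; works on any iterables
        out = []
        first = True
        for s in seqs:
            if not first:
                out.extend(sep)
            out.extend(s)
            first = False
        return out

    print(join([0], [[1, 2], [3, 4]]))  # [1, 2, 0, 3, 4]
    print(join(', ', ['a', 'bc']))      # ['a', ',', ' ', 'b', 'c']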

I agree: you can definitely make it generic in both arguments. I was just arguing that this order of arguments makes more sense.

You tried Coconut? http://coconut-lang.org/

I had not heard of that; it’s an interesting project. But of course it doesn’t address the issue I was talking about, which is the OO nature of existing libraries.

Maybe the way mathematicians notate is frankly ambiguous and confusing, and recent decades of programming experience have given good insight into more expressive methods of notation.

I think this would be a more compelling thread if people discussed particulars of ambiguity in notation and how to resolve it.

Structure and Interpretation of Classical Mechanics [1] walks through ambiguity in physics and math. The first interesting note I found was footnote 2 in the preface [2], citing the ambiguity in the statement of the chain rule.

I am ignorant about some of the insight coming from recent decades of programming and would love to hear it.

[1]: https://mitpress.mit.edu/sites/default/files/titles/content/...

[2]: https://mitpress.mit.edu/sites/default/files/titles/content/...


> recent decades of programming experience

as opposed to recent millennia of mathematical experience?


Let's not pretend math notation has remained constant for millennia.

Exactly, it has been refined over millennia.

Right, and it's important to remember that mathematics is sharply optimized for human understanding, whereas programming languages strike a careful compromise between human understanding and efficient execution by machines. I don't think anyone could seriously argue that programming languages are "more expressive" than mathematical notation, which has the powers of ambiguity and hand-drawn graphics.

You are not wrong. Replace 'efficient execution by machines' with 'efficient processing by a compiler' and you will be even more correct.

Of course it has evolved. But some landmarks of it (e.g., using single letters for "variables") have been in use for some 2,500 years. Now I laugh when programmers tell me to be "civilized" and name my matrices with full words.

To be fair, the mathematical notation that is familiar to us today is just a few hundred years old.

There's no reason you can't do this. Python has free functions. It was an intentional style choice to do it this way.

You can also make a diagonalize function that just calls the method if you like that style better, half of python's generic functions (e.g. len) just call methods anyways.
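E.g., a one-line wrapper (a sketch):

    from sympy import Matrix

    def diagonalize(M):
        # free-function spelling of the method, for functional taste
        return M.diagonalize()

    P, D = diagonalize(Matrix([[2, 0], [1, 3]]))
    print(D)  # D is the diagonal factor holding the eigenvalues 2 and 3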


> Mathematica wins, and will continue to win, because the language is functional and conforms to how Mathematicians think, rather than how programmers like to code.

Or maybe Python "will win" because it conforms to how programmers think, rather than forcing them to think like mathematicians (also, why capitalize "mathematicians" in your sentence?), letting them get their shit done.

More importantly, "win" what exactly? It makes no sense to talk about "winning" if you don't even properly define a victory condition and context first.


It depends on who the intended users of SymPy are. If they are programmers, then conforming to how programmers would think would be useful. If it is targeted at mathematicians, then conforming to how programmers think would just be another unnecessary hurdle to using the library, and would hurt adoption (which is what "win" means here).

> conforms to how programmers think

Python is an important tool in general; it lets you automate many tasks that would otherwise have to be done manually. I wish more non-programmers knew how to use it.


Sympy is great for people who occasionally need such programs. Any serious mathematician will have their preferred software that they use daily anyway.

Sympy works for people who know programming (not just programmers, but researchers in general) who need to use it occasionally.


SymPy is fantastic and impressive and I am a fan. It however does few things better than the program it is directly trying to compete with, namely Mathematica. This gives me scary flashbacks to the early 'Linux on the desktop' advocates, whose main argument was that it was all 'free' in every sense of the word. As we know now, that simply was not enough to switch the majority of users.

Do people see a space where SymPy can realistically flourish?


I think SymPy already flourishes in its space: it's a great tool for users who need some computer algebra functionality with the ability to glue that together with other code (thanks to Python). It's fine as long as you don't need high efficiency or specialized functionality.

Mathematica has no serious competition as a general-purpose symbolic system. I guess Maple is the closest runner-up, but it's more narrowly focused on computer algebra. Most people only need a fraction of Mathematica to get their work done, though.


Maxima plus the SageMath wrappers works more appropriately, I think. Though I would prefer a faster language than Python for such systems: something like Sage, but with Julia.

Nemo (http://nemocas.org/) and Oscar (https://oscar.computeralgebra.de/) are such projects. However, they are focused on algebra and have no support for expression manipulation, symbolic integration and the like.

For symbolic manipulation and all of that, ModelingToolkit.jl (https://mtk.sciml.ai/dev/) is the project, and it's getting SymEngine-like speeds in about a day or so. There's still more to do, but where this shines is the connection to numerical computing, i.e. partially solving models and then spitting out optimized parallelized code for the ODE solver for the other half of the model.

As SymPy is pure Python, if you need speed you can run it under PyPy, a just-in-time compiled implementation of Python.

Maybe as a basis for something like an open-source Photomath [1]. I noticed that kids in my community use it a lot. And it is not only used to cheat on homework :)

[1] https://photomath.app/en/


I use it in Jupyter notebooks for civil and structural engineering calcs. The reason is simple: I can print expressions and manipulate them with a guarantee that my calcs conform to the pretty-printed ones. This allows non-programmers (senior engineers) to check the work on the engineering side, with some additional assurance that the calculation is correct compared to inputting the expression in Python separately from the pretty-printing process.

It runs in a Google Colab tab that you can spin up by pressing Ctrl-T on any available computer.

Sure, but if you want fast, easy math, it's hard to beat Wolfram Alpha, where you can just enter something like "solve x^2+5a=y for x" and get the answer right away.

It's really not. WA is good at parsing natural-language questions but gives you next to no control over the outputs.

Last time I used sympy I had a complicated function on many variables and wanted to plot certain specific 2D contours. I was distant from any computer with numerical/programming tools and needed to add them to a PPT deck last minute.
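For reference, the query upthread is a one-liner in SymPy too, and the result comes back as an expression you can keep manipulating (a sketch):

    from sympy import symbols, solve, Eq

    x, a, y = symbols('x a y')
    print(solve(Eq(x**2 + 5*a, y), x))  # [-sqrt(-5*a + y), sqrt(-5*a + y)]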


+100 this; SymPy is the best!

One of the coolest things about SymPy is the Live Shell, which lets you try basic calculations (without plotting) in the browser: https://live.sympy.org

There is even an option to send the entire session encoded as a URL query string, which makes it tweetable, emailable, etc., e.g. https://live.sympy.org/?evaluate=factor(x**2%2B5*x%2B6)%0A%2... (to generate the session-as-querystring for your current live.sympy.org session, use the thumbtack button below the prompt).

I use this very often to send calculations and solutions—not only do you get the answer, but you see all the calculations.


Here is a short tutorial of the SymPy APIs that are relevant for high school math (equations, algebra, functions, etc.) and first-year university (mechanics/calculus/linear algebra): https://minireference.com/static/tutorials/sympy_tutorial.pd...

I did a fun thing with Sympy for Advent of Code 2017 December 3rd, here's a Jupyter notebook if anyone's interested: https://osdn.net/projects/joypy/scm/hg/Joypy/blobs/tip/docs/...

The puzzle involves a weird "spiral" RAM:

> You come across an experimental new kind of memory stored on an infinite two-dimensional grid.

> Each square on the grid is allocated in a spiral pattern starting at a location marked 1 and then counting up while spiraling outward.

The problem is to compute "the Manhattan Distance between the location of the data and square 1" for any given square.

While working on this I came up with the following equation:

    from sympy import floor, lambdify, solve, symbols

    k = symbols('k')

    E = 2 + 8 * k * (k + 1) / 2
Sympy figures out that this reduces to E = 4k(k + 1) + 2

I needed a function to solve for k given some n... Sympy can do that. Take a new symbol `n`, subtract it from the equation `E`, 0 = 4k(k + 1) + 2 - n and solve for `k`. There are two solutions because the equation is quadratic so it has two roots.

    n = symbols('n')

    g, f = solve(E - n, k)
    
In the context of the puzzle we only care about the larger root:

    (sqrt(n - 1) / 2 - 0.5) + 1
For reasons, I need to take the floor and add 1. Then Sympy can lambdify it and create a fast Python function to compute `k`, given `n`:

    F = lambdify(n, floor(f) + 1)

    for n in (9, 10, 25, 26, 49, 50):
        print(n, int(F(n)))

    9 1
    10 2
    25 2
    26 3
    49 3
    50 4
I'm sorry that it's hard to follow, I'm just excited to share how cool it was to let the computer do the symbolic math for me.


I’ve had similar problems as others posting here have. To be honest when I’ve had to do symbolic math at work (usually some combination of computational geometry and PDEs), I typically fall back to maxima or Mathematica (if my collaborators also have licenses). Those systems are quite solid and very mature when it comes to symbolic work, and I never saw any value in using SymPy since I can get to maxima from Python via Sage. I will admit though: I tend to not do that since I’m usually pretty happy in maxima or Mathematica directly and don’t see a reason to complicate things with a layer of Python. I taught a course a couple years back where I tried to use sympy since we were using a bunch of other Python tools (numpy, scipy, etc...), and the sympy part felt very awkward. I think I remember showing the students maxima and Mathematica to convince them that computers really could do complex symbolic work and not to get too turned off by the Python tools.

The article mentions the qtconsole interface. Does anyone know how to install and run this interface? I could not find such information either on the linked page or in the SymPy docs.

https://github.com/jupyter/qtconsole

> The Qtconsole is a very lightweight application that largely feels like a terminal, but provides a number of enhancements only possible in a GUI, such as inline figures, proper multiline editing with syntax highlighting, graphical calltips, and more.


My first mention of qtconsole in the article is a link to an official page that has detailed installation information.


