Hacker News
A Comparison of Differential Equation Solver Suites (stochasticlifestyle.com)
150 points by one-more-minute 5 months ago | 39 comments

Chris is an encyclopedia on ODEs. Take a look at this. No other system has this thorough a study of so many methods in one place.


This is the kind of thing we imagined experts being able to do when we designed Julia. Having implemented some such solvers in the past, I can safely say this stuff is easily the best out there.

Chris — the author of this blog post and lead developer of the DifferentialEquations.jl ecosystem — had a really nice workshop at this year's JuliaCon. You can watch it online here: https://www.youtube.com/watch?v=75SCMIRlNXM. His intro there is really approachable and even if you just watch the first 10 minutes you can come away with something tangible.

I see those Bombay Sapphires have been working towards a good cause!

But in all seriousness, the comparison was far more thorough than I expected going in. I learned quite a bit, and it's making me consider giving Julia another shot. I've been tinkering mostly with Rust lately, very happily, but this post makes me miss something with the strong scientific bent of Julia. So, job well done, and thanks for taking the time to write this.

Intel has an ODE solver [1] that seems unbelievably good at solving certain kinds of differential equations. I'm surprised/disappointed it wasn't even mentioned. I would've loved to see an evaluation of it against MATLAB (or other solvers) more than any other pair in the list.

[1] https://software.intel.com/en-us/articles/intel-ordinary-dif...

Could you elaborate on what's "unbelievably good" about it? From a quick glance at the site and documentation I don't see any particularly distinguishing features.

Sure. Digging through my old emails, here's the particular example (a very stiff Van der Pol equation) I was looking at, taken from Intel's own examples, which for MATLAB would be:

  [t, x] = ode15s( ...
    @(t, x) [x(2); ((1 - x(1)^2) * x(2) - x(1)) * 10^6], ...
    [0, 160], [2; 0], odeset('RelTol', 1e-5));
  x(end, :)
MATLAB takes 7.2 seconds to output the result [1.872, -0.748].

Intel's dodesol_rkm9mkn() gave me [1.877, -0.743], which is a more accurate result, in less than the blink of an eye: in 0.035 seconds.

That's more than 200x faster, by mere virtue of using a better algorithm. Apparently the algorithm is "Merson's method", which I'd never heard of until then.
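(Edit: I looked it up afterward.) Merson's method turns out to be a classical 5-stage embedded Runge-Kutta pair: a 4th-order solution with a built-in error estimate. Here's a minimal fixed-step sketch in Python of what I believe is the classical Merson tableau; the adaptive stepping machinery a real solver wraps around it is omitted:

```python
import math

def merson_step(f, t, y, h):
    """One step of Merson's 5-stage embedded RK method (scalar form).

    Returns the 4th-order update plus an embedded error estimate.
    """
    k1 = f(t, y)
    k2 = f(t + h/3, y + h*k1/3)
    k3 = f(t + h/3, y + h*(k1 + k2)/6)
    k4 = f(t + h/2, y + h*(k1 + 3*k3)/8)
    k5 = f(t + h,   y + h*(k1/2 - 3*k3/2 + 2*k4))
    y_new = y + h*(k1 + 4*k4 + k5)/6        # 4th-order solution
    err = h*(2*k1 - 9*k3 + 8*k4 - k5)/30    # error estimate for adaptivity
    return y_new, err

# Sanity check on y' = y, y(0) = 1: the exact answer at t = 1 is e.
f = lambda t, y: y
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y, _ = merson_step(f, t, y, h)
    t += h
print(abs(y - math.e))  # tiny: the method really is 4th order
```

The error estimate comes nearly for free from the same five stages, which is what makes the pair attractive for adaptive stepping.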

I went ahead and ran some stuff on my laptop.

    using DifferentialEquations

    van = @ode_def_noinvhes VanDerPol2 begin
      dy = μ*((1-x^2)*y - x)
      dx = 1*y
    end μ=>1.

    prob = ODEProblem(VanDerPol2(μ=1e6),[0;2.],(0.0,160.0))

    @time sol1 = solve(prob,CVODE_BDF(),reltol=1/10^5,save_everystep=false)
    @time sol2 = solve(prob,Rodas5(),reltol=1/10^5,save_everystep=false)
    sol3 = solve(prob,CVODE_BDF(),abstol=1/10^10,reltol=1/10^10)
On my laptop I'm getting ~0.1 seconds for both, with Sundials a little less accurate and OrdinaryDiffEq.jl a little more accurate than the reported Intel result, measured against the 10^-10 reference solution. But "spot checking" like this makes it hard to compare raw timings without confounding them with more/less error, which is why more sophisticated benchmarks use work-precision diagrams. So the timings are in the same ballpark as Intel's, and I'd assume my (not so fast) laptop doesn't match what you benchmarked with; the point is that MATLAB is just slow here rather than Intel being unbelievably exceptional (though it is admirable). Still, maybe Intel does pull off some extra efficiency here, but no method is universally good, so I wouldn't expect that everywhere. In the end, it's probably a well-optimized (probably in the DiffEq.jl/Sundials/FATODE range) single method without event handling or other extra features, but importantly it is abandoned. It probably would've been interesting if development had kept up, but without that I'm just not going to try to get it installed.

Thanks for pointing me to it though. It's definitely an interesting nugget of ODE history that I didn't know existed.

> Probably would've been interesting if development kept up, but without it I think I'm just not going to try to get it installed.

Out of curiosity, why is this an issue? Wouldn't you have just downloaded the library and run it regardless of whether someone is actively developing it or not? I don't really understand this argument (and I see many people making it, not just you).

Well, vanilla ODE solvers are a dime a dozen. Sufficiently optimized stiff ODE solvers tend to reach about the same efficiency, with some methods doing better on some problems and others doing better on other problems. So it's all problem-dependent, and this is just one implicit RK method, while things like ARKODE already have quite a few, FATODE has quite a few, DifferentialEquations.jl has quite a few, etc., meaning I wouldn't expect it to be too different.

What would make it exciting to play with is if it added something: free parallelism or automatic use of the Xeon Phi, complex number support, event handling (there are many examples where this is necessary; one: PDE solvers need it for adding projection steps). But since development has halted, we're pretty much guaranteed that none of this will ever come.

The fact that it's not licensed for redistribution or even any regular use is a pretty good reason to not futz with it.


> Intel grants to you a non-exclusive, non-assignable copyright license to make only the minimum number of copies of the Materials reasonably necessary for your internal testing and development of your products.

> You may not distribute any portion of Materials, whether in source or binary form, to any third party, except as specified in this Agreement.

I don't think this really backs up your statement though. I am not at all surprised by this result. In the post I mentioned that MATLAB wasn't really designed for efficiency, and in this repo I show that even with explicit Runge-Kutta methods (i.e. the exact same methods, and these methods are simple enough that there isn't much variation in implementation details beyond getting it all to SIMD/FMA/etc.) you can get orders of magnitude better than MATLAB:


So since stiff solvers have a lot more moving parts and thus tend to have even larger differences, 200x is easily explained by just language differences. One way to test this is to instead test the Intel solver against Sundials. I wouldn't be surprised if Sundials actually was faster in this case.

But there's another factor here. I explained in the blog post that multistep methods like BDF tend to do well on asymptotically large problems since they minimize the number of user-function evaluations, but they don't tend to do well on smaller systems of ODEs. This is exactly that case: you're comparing an implicit RK method (Intel) to MATLAB's BDF method, and seeing that the RK is more accurate and fast. But this effect of method choice on small (user-scale) problems is exactly why I emphasized it as a downside when solver suites only include multistep BDF methods as their stiff solvers.

That isn't to say there isn't anything interesting here. Intel's nonstiff method is a 4th-order method that can add extra 1st-order stages to boost the stability region, similar to a Runge-Kutta-Chebyshev method. I honestly have no idea what this stiff solver is, so I bookmarked it to read when I get to a campus where the paper is free:


But in general, RKC methods weren't widely adopted because using a full non-stiff or stiff solver usually ends up being better, and picking wildly different methods like this which are not widely known usually isn't a good sign in a very "crowded" field like ODE solvers, where generally things are pretty figured out and it's all about the details.

> I don't think this really backs up your statement though.

I'm not sure which statement you're reading, but everything you said so far is in agreement with my statement:

>> Intel has an ODE solver that seems unbelievably good at solving certain kinds of differential equations. I'm surprised/disappointed it wasn't even mentioned. I would've loved to see an evaluation of it against MATLAB (or other solvers) more than any other pair in the list.

Note that I have specifically stated that this method seems better on (at least) certain kinds of equations, and that, therefore, I would have loved to see a comparison of it with everything else.

In particular, note that I never claimed the method Intel uses is better, at least (especially?) not in general. If I knew that to be the case, I wouldn't have even had a reason to hope for a comparison! The entire point of my comment was that I would be interested in an evaluation of its performance, and I'm glad to see that you feel similarly. :)

Gotta say, "long-abandoned implementation of some unclear method that was considerably faster than MATLAB's known-to-be-slow implementation on some problems" does not sound particularly interesting. I suspect that Chris is only interested at all because he is spectacularly thorough (as evinced by his comparisons with other ODE solvers and his code) and the method might not yet be included in DifferentialEquations.jl. There's little reason to believe that Intel was onto something "magical" here; it seems they just had a good implementation of a decent method.

Finally got on campus to be able to read the paper in detail. It's an interesting method, but it doesn't really seem to have a clear "best use case". It is a Rosenbrock-W (or ROW) type of method, which means it's a Rosenbrock method whose order of accuracy does not depend on having an "exact Jacobian". It then uses this fact to allow "freezing", i.e. keeping the same Jacobian across multiple steps. This is nice because you can then store the factorization and re-use it. In the (m,k) notation, m is the number of linear solves and k is the number of RHS function evaluations. The paper derives a bunch of these, and what Intel used was a (5,2) method.

However, the (5,2) method, in order to have an embedded method for adaptivity, has its coefficients (for the Rosenbrock tableau) almost completely determined. What this means is that the paper is not able to "optimize" the method for low error. Normally a method is written in terms of tableaus, and the numbers in the tableau are chosen so that the principal error terms, i.e. the coefficients of the leading truncation error, are minimized. Sufficiently good RK and Rosenbrock methods can get these down to around 10^-8, meaning that for large enough dt they almost act as if they were an order higher. Of course, the large number of assumptions needed to get the given method means that this couldn't be done.

But does the algorithm have a place? This is the issue... it doesn't necessarily fit well anywhere. Standard Rosenbrock methods require a new Jacobian each timestep, but that's fine when factorization isn't expensive, which is true for smaller ODE systems. Also, if the problem is "very nonlinear", i.e. not semi-stiff but solidly stiff, Jacobian re-use doesn't help anyway. So standard coefficient-optimized Rosenbrock methods do really well when the equations are very stiff and the number of ODEs is small enough. Modern coefficient-optimized (E)SDIRK methods naturally allow for Jacobian re-use but are otherwise similar to the Rosenbrock methods, making them do well in the small-number-of-ODEs but semi-stiff case. The issue with this method is that it tries to sit in the middle, but it's not really optimized for either case.

But the issue really goes back to the 2 in the (5,2). Early in the ODE literature, people believed that reducing the number of stages was always a good idea. You can do 5th-order RK methods in 7 steps (thus 7 f evaluations, where u' = f with f being the user-given ODE function), so why would you ever do 8? However, in the early 90s, when we finally got our hands on PCs, people found that the assumption of minimizing stages doesn't hold up empirically. Instead, you can use more stages, and then use that freedom to design the method so that the local truncation error is orders of magnitude lower. The result is that you can take much larger steps to solve the equation while getting the same error, making everything quicker. To compensate for the fact that you're now taking large steps, you need "dense output", i.e. a continuous extension of the method that uses the internal stage data to turn the solution into a continuous function sol(t) which doesn't require any extra evaluations of f. While these "higher stage with dense output" methods are more cumbersome to implement, the engineering is very much worth it, and if you take a look at pretty much any production-quality method these days you'll see they've ended up as these Rosenbrock, (E)SDIRK, or FIRK methods... except for multistep methods.
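To make the dense-output idea concrete, the simplest continuous extension is a cubic Hermite interpolant built from the step's endpoint values and slopes, which gives sol(t) anywhere inside a step with zero extra f evaluations. This is just a sketch; production solvers use higher-order interpolants built from the internal stage data:

```python
import math

def hermite_dense_output(t0, t1, y0, y1, f0, f1):
    """Cubic Hermite continuous extension over one step [t0, t1].

    Uses only data a one-step method already has: endpoint values
    y0, y1 and endpoint slopes f0 = f(t0, y0), f1 = f(t1, y1).
    """
    h = t1 - t0
    def sol(t):
        s = (t - t0) / h  # normalized position inside the step
        return ((1 + 2*s) * (1 - s)**2 * y0
                + s * (1 - s)**2 * h * f0
                + s**2 * (3 - 2*s) * y1
                + s**2 * (s - 1) * h * f1)
    return sol

# For y' = y over one step [0, 0.1], interpolate the midpoint.
t0, t1 = 0.0, 0.1
y0, y1 = 1.0, math.exp(0.1)
# since f(t, y) = y, the endpoint slopes equal the endpoint values
sol = hermite_dense_output(t0, t1, y0, y1, y0, y1)
mid = sol(0.05)
print(abs(mid - math.exp(0.05)))  # small interpolation error, no extra f calls
```

The interpolant matches the solution and its slope exactly at both endpoints, so the only error is in between, and it shrinks rapidly with the step size.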

Multistep methods, though, use history to achieve high order with just a single function evaluation in the implicit equation per step. This is what's best when f is expensive, and it's another reason why stage reduction isn't necessarily helpful: you end up in an area where other methods are already pretty optimal. The classic is the BDF method, which can do the Jacobian re-use as well. While BDF methods aren't L-stable above order 2, they can be adaptive-order and do really well in mildly stiff situations.
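To make the multistep idea concrete, here's a fixed-step BDF2 sketch for the scalar linear test problem y' = λy, where the implicit solve reduces to a single division. This is only illustrative: a real implementation Newton-iterates the implicit equation and re-uses a factorized Jacobian across steps.

```python
def bdf2_linear(lam, y0, h, n_steps):
    """Fixed-step BDF2 for the linear test problem y' = lam * y.

    BDF2: y_{n+1} = (4/3) y_n - (1/3) y_{n-1} + (2/3) h f(t_{n+1}, y_{n+1}).
    For f = lam*y the implicit equation is linear, so solve it directly.
    Two history values are needed; bootstrap with one backward Euler step.
    """
    # Backward Euler bootstrap: y1 = y0 + h*lam*y1  =>  y1 = y0 / (1 - h*lam)
    y_prev, y = y0, y0 / (1 - h * lam)
    for _ in range(n_steps - 1):
        # only one "f evaluation" (the lam*y term) enters the implicit solve
        y_prev, y = y, (4*y/3 - y_prev/3) / (1 - 2*h*lam/3)
    return y

# Stiff test: lam = -1000 with a step far larger than 1/|lam|.
y_end = bdf2_linear(-1000.0, 1.0, 0.1, 10)
print(y_end)  # decays toward 0 instead of exploding: the method is A-stable
```

An explicit method at this step size (h·λ = -100) would blow up immediately, which is the whole point of paying for the implicit solve.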

So that said, there really isn't a good niche for these types of methods. I have other low-stage Rosenbrock tableaus implemented, and it comes out pretty much as I describe here: more error for the same amount of time as the others. Rosenbrock-W (ROW) methods are not important anymore because with autodifferentiation we don't need numerical differentiation and thus get exact Jacobians easily, which negates the purpose of the ROW method (they were invented because numerical errors in numerical differentiation caused order loss in Rosenbrock methods; the tradeoff is a lot more assumptions and a lot less coefficient optimization).
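The "exact Jacobians via autodifferentiation" point is easy to illustrate with a tiny forward-mode dual-number implementation in Python. Real codes use ForwardDiff.jl or similar; this sketch only supports +, -, and *, which is enough for the Van der Pol right-hand side from the snippet above:

```python
class Dual:
    """Minimal forward-mode AD: a value plus a vector of partials."""
    def __init__(self, val, eps):
        self.val, self.eps = val, eps
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o, [0.0] * len(self.eps))
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, [a + b for a, b in zip(self.eps, o.eps)])
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, [a - b for a, b in zip(self.eps, o.eps)])
    def __rsub__(self, o):
        return self._lift(o).__sub__(self)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val,
                    [self.val * b + o.val * a for a, b in zip(self.eps, o.eps)])
    __rmul__ = __mul__

def jacobian(f, x):
    """Exact Jacobian of f: R^n -> R^m; no finite-difference truncation error."""
    n = len(x)
    duals = [Dual(x[i], [1.0 if j == i else 0.0 for j in range(n)])
             for i in range(n)]
    return [fi.eps for fi in f(duals)]

# Van der Pol RHS (mu = 1e6), state u = (y, x) as in the Julia snippet above:
mu = 1e6
def vdp(u):
    y, x = u
    return [mu * ((1 - x*x)*y - x), y]

J = jacobian(vdp, [0.0, 2.0])
print(J)  # [[mu*(1 - x^2), mu*(-2*x*y - 1)], [1, 0]] evaluated at (y, x) = (0, 2)
```

Every entry is exact to floating point, which is exactly why the order-loss problem that motivated ROW methods goes away when a solver can autodiff the user's f.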

So the method Intel chose, extrapolating from other low-stage Rosenbrock methods and its other properties, would do best in cases where the equation is not too nonlinear (you want to make use of the Jacobian freezing, otherwise there's no point since standard Rosenbrock methods would simply get less error for the same steps), but also very stiff (otherwise you're in the regime where BDF dominates), yet not so stiff that you'd want the IMEX properties of the modern (E)SDIRK methods, and with f sufficiently costly that the higher-stage (E)SDIRK methods don't do better.

This puts it into a very, very small niche, which is probably why this method was lost to the literature and not really adopted. But then again, since it does sit right in the middle of a lot of things, if you were to pick just one method to implement, this is probably a pretty good choice. So while Intel probably did a very good implementation of it, I don't see a reason to go looking for it elsewhere. If this method did well on your problem, then a Rosenbrock, (E)SDIRK, or BDF method will probably do better, depending on which regime the problem falls into.

Wow! Thank you for the comprehensive review. Leaves me with nothing else to say but thanks :)

Do the initials stand for Runge-Kutta-Merson, order 9? I wonder about the other 3 (mkn)

I believe those are the initials, yes.

The "9" refers to the maximum number of "stages" which is not the same thing as the order of the solver.

The (m,k) refer to the corresponding parameters of the method (I think).

The "n" at the end is for "numerical" Jacobian (as opposed to "a" for user-provided/"analytical").

There should be a PDF in the download that explains these better than I possibly can (though it's still somewhat terse).

I had a look before asking but could not find anything before the download. Thanks, it piqued my curiosity (used to write & theoretically analyse several numerical integrators back in my teaching days)

Intel's ODE solver library appears to be pre-release software, retired in 2011, that was only licensed for "internal testing and development."

That's the ironic thing about it. At least for the examples I've tried (probably a biased subset, but still), it beats the crap out of the couple of commercial solvers I've tried.

Might it be somewhat related to the 'Neuromorphic Chip' recently announced by Intel, which, as discussed here on HN[1], is said to be especially good at approximating dynamic systems and at symbolic computation?

[1] https://news.ycombinator.com/item?id=15334533

For a more "systems" approach for ODEs, you should look at Modelica: https://en.wikipedia.org/wiki/Modelica It's very powerful stuff.

Julia's meta-programming functionality is ideal for implementing such domain-specific languages (the popular JuMP suite is a flagship example in a different domain: https://jump.readthedocs.io/en/latest/).

Several groups are working on systems inspired by Modelica, including:



https://www.modelica.org/events/modelica2017/proceedings/htm... (https://github.com/ModiaSim/Modia.jl)

Can anyone recommend any introductory material on real-world examples of differential equations? I don't mean the first-year undergrad textbook stuff with differential equations for population growth and chemical reactions and mechanics that can be solved via separation of variables etc. I mean descriptions of models used in science and engineering that result in needing to use the fancy solvers that this thread is discussing.

Electronics is all about differential equations.

> Circuit simulation programs take a text netlist describing the circuit elements (transistors, resistors, capacitors, etc.) and their connections, and translate this description into equations to be solved. The general equations produced are nonlinear differential algebraic equations


All the way down to transistors and other basic elements

> These models are very computer intensive, involving detailed spatial and temporal solutions of coupled partial differential equations on three-dimensional grids inside the device


This seems to be all about ODEs. Has anyone got any recommendation for a good PDE solver? One that might run a problem having more than three spatial dimensions? (and ideally callable from Python)

Talking about PDEs is way more complex and I didn't get into it. What do you mean by a good PDE solver? Generally a PDE solver is built by doing some spatial discretization and then either solving that with some numerical linear solver directly (if it's a time-independent problem, or if it's a fully-implicit discretization), or using an ODE solver. There are many different spatial discretization techniques, including finite element, finite difference, finite volume, and spectral methods. There are libraries for helping one discretize space using each of these methods.

But then there are "full-stop PDE solvers" which choose a specific PDE and solve that. For example, MATLAB and Mathematica can numerically solve diffusion-advection systems directly. And there are many user packages for different PDEs.

So to cover this fully, I would need to go through the available tools for numerical linear algebra and spatial discretization, but on the other hand cover some full-stop PDE solvers as well. This will be a different blog post. Without knowing your background or what you want to do, FEniCS is a good Python finite element toolbox that can then be used to build the linear equation or system of ODEs, from which you use other tools like those mentioned here to complete the solving. As for full-stop PDE solvers in Python, I am not sure and need to look into it myself...

Thanks! My problem is based on a straightforward constant-coefficient diffusion-advection equation, but there are complicated boundaries and I'd like it to work in six or seven dimensions (given time and a big computer!). I have read a little about FEniCS but I think it will struggle just to generate the mesh. I'm looking at Mathematica now. It seems like it will treat an evolution equation specifically (method of lines)---perhaps that's what you mean by a "full-stop solver"?

Yup, that's exactly what I mean. People have pre-written solvers for specific equations. Sounds like you found one for you, so that's great! When you ask about PDE tools, though, people start talking about tools for building PDE solvers like FEniCS because it's so rare to find something exactly for your PDE, but there's no reason to use these tools unless you have to.

There is no good general-purpose PDE solver, as there are all these considerations: elliptic/parabolic/hyperbolic PDEs, accuracy/stability, FEM/FVM/FDM, linear/nonlinear, numerical oscillation, steady-state/unsteady, etc. But I do recommend the FEniCS PDE solver [1]. It is fairly easy to use if you understand the finite element method and the type of PDE you are solving.

[1] https://fenicsproject.org/

There was a GSoC project on Julia wrappers for FEniCS.


FEniCS (https://fenicsproject.org/) is probably the best open source solution currently available and it works great from python. They even have a pretty good free book to getting started: https://fenicsproject.org/pub/tutorial/pdf/fenics-tutorial-v...

PDEs are hard to solve numerically, because you usually need to evaluate the derivatives. The errors are not smooth functions of the coordinates, so their derivatives tend to swamp the derivative of the solution. By contrast, ODE solving is about integrals, which add up the nearly equal quantities that derivatives subtract.
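A quick way to see this differentiation-vs-integration asymmetry: add tiny noise to samples of a smooth function, then compare how a finite difference amplifies it versus how a trapezoid sum averages it out. The step size and noise level here are arbitrary, just chosen to make the effect visible:

```python
import math, random

random.seed(0)
h = 1e-3
noise = 1e-6
ts = [i * h for i in range(1001)]
ys = [math.sin(t) + random.uniform(-noise, noise) for t in ts]

# Finite-difference derivative at the midpoint: the noise gets divided by h.
i = 500
deriv = (ys[i+1] - ys[i-1]) / (2*h)
deriv_err = abs(deriv - math.cos(ts[i]))

# Trapezoid integral over [0, 1]: the noise gets averaged and multiplied by h.
integral = h * (sum(ys) - 0.5*ys[0] - 0.5*ys[-1])
integral_err = abs(integral - (1 - math.cos(1.0)))

# differentiation error is dominated by noise/h; integration error stays tiny
print(deriv_err, integral_err)
```

The same asymmetry is why timestepping (integration) tolerates floating-point and discretization noise so much better than evaluating spatial derivatives does.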

In this area of numerical computing, the important part is still the user, and the program they use is mostly a matter of what's convenient.

It's a pity there are no JavaScript options listed.

Wouldn't it be a single entry, numeric.js?

Yeah, I'm not sure of any others. But looking at numeric.js, it has a single ODE solver, which is Dopri, and thus matches the tableau of ode45 and dopri5. However, it's easy to see that their implementation is very different just from this line:

    h = min(0.8*h*pow(tol/erinf,0.25),4*h);
That means the timesteps are changed via Ceschino's optimality formula, the "standard" timestepping method, which isn't actually used in production-quality solvers. The reason is that this form of adaptivity causes solvers to change h too rapidly, which actually decreases the stability region and increases the number of step rejections. So all of the other explicit RK solvers mentioned (except where I single one out) use a PI-controller instead. And this makes a pretty big difference: I've found something like a 10x performance change from using a PI-controller on many equations, and there are many other equations where, at a "simple" tolerance like reltol=10^-3, a dopri-type method will diverge without a PI-controller on the timesteps. It does have event handling and dense output. However, since it's lacking any solvers for stiff ODEs, I'm not sure it really is a full suite.

I see no mention of multigrid methods.

Multigrid methods are linear solvers, not timestepping methods; in timestepping they only appear as the inner linear solver. PDEs will be a separate post.

I'm so looking forward to this post! hp-FEM methods in particular caught my attention; they look so promising. Do they work in practice?

Some time ago I was trying to make a 'hpq-FEM' using rational functions as a basis, like a Padé approximation. Do you know if anyone has tried something similar?

