Kepler's Goat Herd: An Exact Solution for Elliptical Orbit Evolution
57 points by mkoc 13 days ago | 12 comments

I wonder how it happened that the inner loop here (https://github.com/oliverphilcox/Keplers-Goat-Herd/blob/3a0b...) with N_it=5 is 2 times slower than the inner loop here (https://github.com/oliverphilcox/Keplers-Goat-Herd/blob/3a0b...) with N_it=18. The faster loop doesn't look like it should be two times faster at all, and I've spent a lot of time optimizing numerical code. Is it possible that the compiler managed to vectorize the faster loop but not the slower one, or something like that? Or is it specifically that the divisions are too many and too expensive? Or the N_it-1 extra evaluations of sincos?

I see a lot of sin's and cos's in both of those, and SSE and AVX don't have either of those operations for vectors, so I doubt it is the vectorization.

This is pretty notable, isn't it? This is the first exact, non-series solution (?) of a very famous equation from orbital mechanics, one first posed 400+ years ago. https://en.wikipedia.org/wiki/Kepler%27s_equation

The work this builds on was discussed here (121 comments):

In numerical code the distinction between exact closed-form solutions and approximate solutions (through Newton's method and the like) is just not that meaningful. What determines speed is how much arithmetic you have to do, and it's quite possible for a closed-form solution to be too cumbersome to evaluate compared to an approximate method. After all, approximate methods only have to converge to tolerance; nobody cares about exactness itself.

Yes, well put. I'd take it further - in numerical code there is no real distinction /at all/ between closed form vs iterative solutions.
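(For concreteness, the iterative baseline in this comparison is only a few lines. A minimal Newton's-method sketch for Kepler's equation M = E - e*sin(E); this is the standard textbook iteration, not the paper's code, and the starting-guess rule is just one common convention:)

```python
import math

def kepler_newton(M, e, tol=1e-12, max_it=50):
    """Solve Kepler's equation M = E - e*sin(E) for the
    eccentric anomaly E by Newton's method."""
    E = M if e < 0.8 else math.pi   # common starting-guess heuristic
    for _ in range(max_it):
        f = E - e * math.sin(E) - M      # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)       # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:                # converged to tolerance
            break
    return E

E = kepler_newton(1.0, 0.5)   # eccentric anomaly for M=1.0, e=0.5
```

(Note the stopping test: the iteration only has to converge to tolerance, which is exactly the point being made above.)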
Every type of numerical code is subject to truncation error; an iterative method with few moving parts may systematically converge to a more accurate result than the direct evaluation of a closed-form expression. It's quite interesting to delve into how special functions like `sin` are actually implemented and the lengths people go to to make them "correctly rounded" (see, for example, crlibm). Even something as simple as linear interpolation between two floating-point endpoints can be quite subtle if you want an implementation that is exactly correctly rounded and also fast.

> I'd take it further - in numerical code there is no real distinction /at all/ between closed form vs iterative solutions.

This assertion is wrong, however. Closed-form solutions at worst accumulate the rounding errors of a single expression, while iterative solutions not only pile up rounding errors throughout the iterations but also have to stop by truncating the rest of the solution, leading to results that are not only approximate but also amplify rounding errors.

This is false, and your description of how to analyze and compare the errors of iterative algorithms is also wrong. In short, if you have an algorithm with error n*eps compared to one with error m*eps + tol, there's no a priori reason why one would be greater than, equal to, or smaller than the other; it's an overgeneralization, because you don't yet know what m, n, and tol are. Focusing on "truncating the rest of the solution" is also wrong, because it makes you think of only some specific types of iterative algorithms, as if they all converged linearly or sublinearly. In this particular paper the closed form is given as a ratio of two integrals, both integrals are evaluated approximately using a perfectly reasonable, suitably chosen quadrature rule, and that rule's truncation error is small enough not to matter.

Computational complexity seems like a useful frame for thinking about such distinctions.
If you can calculate the state of a system at time t in O(log t), that's a lot like having a closed-form solution. On the other hand, if the calculation takes O(t) time, it seems much less like a closed-form solution and more like a step-by-step simulation.

This is something I wish they had hammered into you more in school. It took me a while to realize that first you have to understand how good something needs to be, and then just do it at least that well.

> In numerical code the distinction between exact closed-form solutions and approximate solutions (through Newton's method and the like) is just not that meaningful.

Not that meaningful indeed, until, as the abstract states, it outperforms existing series and root-finding solutions by a factor of at least 2. QED.

QED.

Nice to see reproducible research getting more and more common. The sources are at https://github.com/oliverphilcox/Keplers-Goat-Herd (the link is also on the arXiv page, but a bit hidden).

Eccentric anomaly: https://en.m.wikipedia.org/wiki/Eccentric_anomaly
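(The "ratio of two integrals" closed form mentioned upthread can be sketched generically via the argument principle: the lone root of f inside a contour equals the contour integral of z·f'(z)/f(z), normalized by the integral of f'(z)/f(z). This is a textbook sketch, not Philcox's exact contour or quadrature, and it assumes the chosen circle encloses exactly one root:)

```python
import cmath
import math

def contour_root(f, fp, center, radius, n=128):
    """Estimate the single root of f inside a circle using the
    argument principle, with the trapezoidal rule on the circle
    (exponentially convergent for periodic analytic integrands)."""
    num = 0.0 + 0.0j
    den = 0.0 + 0.0j
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        z = center + radius * cmath.exp(1j * theta)   # point on contour
        dz = 1j * radius * cmath.exp(1j * theta)      # dz/dtheta
        g = fp(z) / f(z) * dz
        num += z * g     # ~ oint z f'/f dz  (up to constants that cancel)
        den += g         # ~ oint f'/f dz
    return num / den

# Kepler's equation: f(E) = E - e*sin(E) - M. The real root satisfies
# |E - M| <= e, so a circle around M of radius comfortably larger than e
# encloses it (and, at this modest eccentricity, nothing else).
M, e = 1.0, 0.5
f = lambda E: E - e * cmath.sin(E) - M
fp = lambda E: 1.0 - e * cmath.cos(E)
E = contour_root(f, fp, center=M, radius=e + 0.3).real
```

(The quadrature's truncation error here decays exponentially in the number of nodes, which is the sense in which it can be made "small enough not to matter" at a fixed, modest cost.)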