Exactly what order and degree were you using to evaluate the model? For objects in LEO, variations in drag and solar pressure are more significant than the uncertainty in the gravity field somewhere well below 127th order (40 microseconds on my machine, your mileage may vary), so you can safely truncate the model for simulations. GRACE worked by making many passes so that those perturbations could be averaged out of the measurement. But for practical applications, those tiny terms are irrelevant.
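For anyone who hasn't touched these models, truncation really is just capping the degree of the sum. A rough, unnormalized sketch (the C/S coefficient arrays here are placeholders; real models like EGM2008 ship fully normalized coefficients and need stable normalized recursions at high degree):

```python
# Sketch of truncating a geopotential series at n_max. C[n, m] and S[n, m]
# are placeholder unnormalized coefficient arrays, not a real model.
import numpy as np
from scipy.special import lpmn

MU = 3.986004418e14   # Earth's GM, m^3/s^2
A = 6378137.0         # reference radius, m

def potential(r, lat, lon, C, S, n_max):
    # P[m, n] = associated Legendre P_n^m(sin(lat)); scipy's sign convention
    # differs from the geodesy one, which is fine for a sketch.
    P, _ = lpmn(n_max, n_max, np.sin(lat))
    V = 0.0
    for n in range(n_max + 1):
        for m in range(n + 1):
            V += (A / r) ** n * P[m, n] * (
                C[n, m] * np.cos(m * lon) + S[n, m] * np.sin(m * lon)
            )
    return MU / r * V
```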
Always super satisfying to take a 1.2s calculation and make it orders of magnitude faster. Recently I had a complicated calculation mostly done in SQLite (with some C callbacks to do core floating point ops) that was taking 1.5s; rewrote it into a hand-crafted incremental computation network and got the calculation down to 6ms.
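Not the SQLite/C system above, obviously, but the core incremental-computation idea fits in a toy sketch: cache each node's value and only recompute what an input change actually dirties.

```python
# Toy illustration of an incremental computation graph (not the system
# described above): nodes cache their value and only recompute when an
# upstream input changes.
class Node:
    def __init__(self, compute=None, deps=()):
        self.compute = compute        # function of dependency values (None for inputs)
        self.deps = list(deps)
        self.dependents = []
        self.value = None
        self.dirty = True
        for d in self.deps:
            d.dependents.append(self)

    def set(self, value):             # for input nodes
        self.value = value
        self.dirty = False
        for d in self.dependents:
            d.invalidate()

    def invalidate(self):
        if not self.dirty:
            self.dirty = True
            for d in self.dependents:
                d.invalidate()

    def get(self):
        if self.dirty:
            self.value = self.compute(*(d.get() for d in self.deps))
            self.dirty = False
        return self.value

# usage: only the dirtied part of the graph is recomputed
a, b = Node(), Node()
total = Node(lambda x, y: x + y, deps=(a, b))
a.set(1.0); b.set(2.0)
print(total.get())   # 3.0 (computed)
a.set(5.0)           # marks 'total' dirty; 'b' stays cached
print(total.get())   # 7.0 (recomputed lazily)
```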
Do I understand right, that it's summing ~2 million spherical harmonic terms, and the optimization is that that's accomplished in ~200 ms (per test point?)? Does this force field feed into something like a Runge-Kutta solver for predicting orbits? (I don't know much about orbital mechanics).
Yup, your understanding is correct. When set to full fidelity, it computes and sums 2,331,720 terms. What we optimized is the way the spherical harmonics are generated, which allows for the generation and summation to happen in less than 250 ms. Once that force information is generated, it is passed to a semi-implicit integrator – we also support RK4, but this test was using semi-implicit. That allows you to run orbital dynamics simulations; our primary use case is for testing satellite control systems.
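For the curious, a semi-implicit (symplectic Euler) step is just this; a generic sketch with accel_fn standing in for the gravity model, not their actual implementation:

```python
def semi_implicit_step(x, v, dt, accel_fn):
    # One semi-implicit (symplectic) Euler step: kick the velocity with the
    # acceleration at the current position, then drift the position using
    # the *new* velocity. Much better long-term energy behavior than
    # explicit Euler for orbit propagation.
    v_new = v + dt * accel_fn(x)
    x_new = x + dt * v_new
    return x_new, v_new
```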
It's a pretty minor point, but the pedant in me appreciates it nonetheless!
It's also true that since most people are not subjected to the horrors of learning about PDEs, the place where they may be exposed to spherical harmonics is in atomic orbitals from high school chemistry, so I could see where you were coming from.
What's outlined in the article has, from my PoV at least, been true since at least 1980 (and older coders will likely chime in with earlier tales): handy implementation and simulation libraries generally work and provide good reference results to check against, but you can almost always get an order of magnitude faster if you can put the time in.
I put the earlier pre-2008 epoch models for Earth's gravity and magnetics through a similar workout some time ago. For our use case we spent some time and money building a custom TI DSP chip pipeline to get the turnaround that made data exploration playful rather than an overnight grind.
Thank you for giving a plausible explanation as to why the main content of the website doesn’t load, but the rest does! Looking at my NextDNS logs, the website does indeed seem to have plenty of telemetry. I’ll keep it blocked.
Sorry to hear you had to block the site. Do you have issues loading other Webflow sites because of NextDNS? I've found many threads on issues between them, but not much in the way of a solution.
I've never seen anyone use Barnes-Hut for an earth gravity model. I'd be curious to see how an implementation like that would work. I think Barnes-Hut is typically used for large N-body simulations where each particle moves independently, like for galactic simulations.
One of the biggest barriers to alternate geopotential models is the availability of trusted datasets. In another comment someone linked to a PINN-based method that looks super promising.
LUTs are commonly used in geodesy applications on or near the Earth's surface. The full multipole model is used for orbital applications to account for the way that local lumpiness in Earth's mass distribution is smoothed out with increasing distance from the surface. It might be reasonable to build a 3D LUT for use at Starlink scale or higher, but certainly not for individual satellites.
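If someone did want to try the Starlink-scale version, a trilinear-interpolation LUT is only a few lines in JAX. Everything here (grid shape, bounds, the precompute step) is hypothetical:

```python
import jax.numpy as jnp
from jax.scipy.ndimage import map_coordinates

def make_lut_accel(ax, ay, az, lo, hi):
    # ax/ay/az: (N, N, N) acceleration components precomputed from the full
    # model on a regular grid covering the box [lo, hi]^3 (hypothetical setup).
    n = ax.shape[0]

    def accel(pos):
        # map the physical position to fractional grid indices
        idx = (pos - lo) / (hi - lo) * (n - 1)
        coords = [idx[0:1], idx[1:2], idx[2:3]]
        # trilinear interpolation (order=1) of each component
        return jnp.concatenate([
            map_coordinates(g, coords, order=1) for g in (ax, ay, az)
        ])

    return accel
```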
TL;DR: JAX (the GPU-accelerated Python ML framework)
Pretty cool to see applications of all this good stuff developed for ML in other areas! I guess a side effect is that the calculations should now be differentiable and you could optimize various parameters using gradient descent if you wanted. You could even use it as a component of a neural net and backpropagate gradients through it.
Also, it's not clear whether they used GPUs, but they probably could for an even better speedup in scenarios with large batches of calculations.
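Both points are cheap to sketch. Here a simple point-mass potential stands in for the article's spherical-harmonic evaluation; any JAX-traceable implementation would slot in the same way:

```python
import jax
import jax.numpy as jnp

MU = 3.986004418e14  # Earth's GM, m^3/s^2

# Stand-in for the real geopotential evaluation.
def potential(pos):
    return MU / jnp.linalg.norm(pos)

# Differentiable: acceleration is the gradient of the potential.
accel = jax.grad(potential)

# Batched and jit-compiled: evaluate many satellites at once, which is
# where a GPU would really pay off.
batched_accel = jax.jit(jax.vmap(accel))

positions = jnp.array([[7.0e6, 0.0, 0.0],
                       [0.0, 7.2e6, 0.0]])  # metres, made-up test points
print(batched_accel(positions))
```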
No offense but this doesn't seem "revolutionary" at all / people have been doing hierarchical representations and multipole expansions since forever. The presentation of a single fixed-resolution bitmap of dense equations with no introduction of terms feels like it's trying to do proof by intimidation.
When doing path guiding in Monte Carlo path tracing you often have lots of nasty low-probability / high-contribution paths that make gravitational simulation seem easy. After all, if you could simulate light paths efficiently a priori, then you could efficiently solve optical computers / the Halting Problem.
IERS Technical Note 36 section 6.1 gives recommendations for model truncation if you are looking for justification. https://iers-conventions.obspm.fr/content/tn36.pdf