
Math Invented for Moon Landing Helps Your Flight Arrive on Time - wglb
https://www.nasa.gov/feature/ames/math-invented-for-moon-landing-helps-your-flight-arrive-on-time
======
Isamu
About the significance of the Schmidt-Kalman filter (from a related abstract):

>In most target tracking formulations, the tracking sensor location is
typically assumed perfectly known. Without accounting for navigation errors of
the sensor platform, regular Kalman filters tend to be optimistic (i.e., the
covariance matrix far below the actual mean squared errors) ... The Schmidt-
Kalman filter (SKF) does not estimate the navigation errors explicitly but
rather takes into account the navigation error covariance provided by an on-
board navigation unit in the tracking filter formulation. By exploring the
structural navigation errors, the SKF is not only more consistent but also
produces smaller mean squared errors than regular Kalman filters.

------
romwell
Kalman filters are useful for much more than that :)

I'm barely acquainted with them, but this[1] seems to be a good introduction
that doesn't shove math under the rug.

Schmidt's modification of the Kalman filter made a practical implementation
possible on the Apollo computer. The history and the motivation for the
changes are excellently described in a 1985 survey paper co-authored by
Schmidt himself[2]:

>Dr. Kalman's original formulation would have required an on-board crew to
make a continuous sequence of optical measurements equally spaced in time
throughout the lunar mission, an impractical scenario. Therefore, to implement
our measurement and course-correction schedule, the original formulation had
to be revised.

This paper, [2], offers amazing historical tidbits (especially relevant on
HN): computational challenges, the project running 6 months behind schedule,
why it was better than least-squares methods (computation time!), numerical
stability, implementation details that allowed the use of single-precision
floats instead of double precision, a FORTRAN implementation vs. specialized
hardware, etc.

I find it amazing that nearly 60 years later, we are still dealing with the
same issues in machine learning and high-performance computing: from single
vs. double precision and numerical stability down to, well, using Kalman
filters - and yes, FORTRAN and specialized hardware. And projects running
behind schedule.

(Last time I had to touch FORTRAN code was... a month ago. Also, guess what
powers all your shiny NumPy/SciPy computations: a lot of FORTRAN.)

[1][https://towardsdatascience.com/kalman-filter-an-algorithm-
fo...](https://towardsdatascience.com/kalman-filter-an-algorithm-for-making-
sense-from-the-insights-of-various-sensors-fused-together-ddf67597f35e)

[2][https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/198600...](https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19860003843.pdf)

~~~
ska
> I find it amazing that nearly 60 years later, we are still dealing with the
> same issues in machine learning and high-performance computing: from single
> vs. double precision and numerical stability down to, well, using Kalman
> filters - and yes, FORTRAN and specialized hardware. And projects running
> behind schedule.

On the other hand, I don't find this amazing at all. Numerics are
fundamentally difficult, and the people who worked on them at the beginning
were very smart. Projects have always run behind schedule. It doesn't
surprise me at all that we've been stuck at (at least) a local maximum on
many of these things....

~~~
romwell
The amazing part to me was that people reached those maxima so fast: by
1960, barely after computers had been invented, and with such limited
resources.

~~~
ska
Perhaps think about it this way: precisely because we had such limited
resources at the time, smart people were highly motivated to find those
(local?) maxima. These days in many contexts it is easier to say "eh, good
enough" and move on to the next thing.

~~~
acqq
To add to this topic: the fast hardware adder, patented by IBM in 1957 and
now present in some form in all fast CPUs, had already been invented by
Charles Babbage between 1820 and 1830, before the middle of the 19th (!)
century, as he designed his _mechanical_ computing "difference" engine:

[https://en.wikipedia.org/wiki/Carry-
lookahead_adder](https://en.wikipedia.org/wiki/Carry-lookahead_adder)

Apparently, he was proud of that invention; only, almost nobody was able to
appreciate it at the time.

From "Passages from the Life of a Philosopher (1864) by Charles Babbage"

[https://en.wikisource.org/wiki/Passages_from_the_Life_of_a_P...](https://en.wikisource.org/wiki/Passages_from_the_Life_of_a_Philosopher/Chapter_V)

"The first idea was, naturally, to add each digit successively. This, however,
would occupy much time if the numbers added together consisted of many places
of figures.

The next step was to add all the digits of the two numbers each to each at the
same instant, but reserving a certain mechanical memorandum, wherever a
carriage became due. These carriages were then to be executed successively."
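Babbage's two-phase scheme (add all digit pairs at once, note where a carry is due, then resolve the carries) anticipates the generate/propagate formulation used in modern electronic carry-lookahead adders. A toy Python sketch of the electronic version (the loop here stands in for logic that hardware unrolls into a two-level function of the inputs):

```python
def carry_lookahead_add(a: int, b: int, width: int = 8) -> int:
    """Add two `width`-bit integers by computing all carries from
    per-bit generate/propagate signals rather than rippling them."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]  # g_i: a_i AND b_i
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]  # p_i: a_i XOR b_i
    c = [0] * (width + 1)          # c[0] is the carry-in
    for i in range(width):
        # c_{i+1} = g_i OR (p_i AND c_i); hardware flattens this recurrence
        # so every carry is available at once, Babbage's "memorandum".
        c[i + 1] = g[i] | (p[i] & c[i])
    s = 0
    for i in range(width):
        s |= (p[i] ^ c[i]) << i    # sum bit: p_i XOR c_i
    return s                        # carry-out c[width] dropped (mod 2**width)
```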

A 3D model:

[https://www.youtube.com/watch?v=B2EDE8Srdcw](https://www.youtube.com/watch?v=B2EDE8Srdcw)

by the author of:

[https://en.wikipedia.org/wiki/The_Thrilling_Adventures_of_Lo...](https://en.wikipedia.org/wiki/The_Thrilling_Adventures_of_Lovelace_and_Babbage)

------
acqq
The paper by McGee and Schmidt (1985):

[https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/198600...](https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19860003843.pdf)

"Discovery of the Kalman Filter as a Practical Tool for Aerospace and Industry
-- Leonard A. McGee and Stanley F. Schmidt"

------
salty_biscuits
Are they just talking about the extended Kalman filter? If so, it's funny
that there was such a big conceptual gap between the linear and linearized
versions (i.e., a gap of a few years). Hindsight is a wonderful thing.

~~~
6gvONxR4sf7o
Nope. As far as I can tell, the Schmidt-Kalman filter is a way of reducing
dimensionality in a KF. You split the states between those you are interested
in and those you aren't. Sometimes the states you don't care about (e.g.,
sensor biases) are still important for estimating the ones you do care about
(e.g., position). Adding a state to a typical KF for each of these dimensions
is one solution, but it's expensive (the covariance matrix grows
quadratically with state dimension). The Schmidt-Kalman filter is another
solution.

