> I find it amazing that nearly 60 years later, we are still dealing with the same issues in machine learning and high-performance computing: from single vs. double precision and numerical stability down to, well, using Kalman filters - and yes, FORTRAN and specialized hardware. And projects running behind schedule.
On the other hand, I don't find this amazing at all. Numerics are fundamentally difficult, and the people who worked on them at the beginning were very smart. Projects have always been behind schedule. It doesn't surprise me at all that we've been stuck at what is, at the very least, a local maximum on many of these things...
The amazing part to me was that people had reached those maxima so fast: by 1960, barely after computers had been invented, and with such limited resources.
Perhaps think about it this way: precisely because we had such limited resources at the time, smart people were highly motivated to find those (local?) maxima. These days in many contexts it is easier to say "eh, good enough" and move on to the next thing.
To add to this topic: the fast hardware adder, patented by IBM in 1957 and now present in some form in all fast CPUs, was already invented by Charles Babbage between 1820 and 1830, before the middle of the 19th (!) century, as he designed his mechanical "difference" engine:
"The first idea was, naturally, to add each digit successively. This, however, would occupy much time if the numbers added together consisted of many places of figures.
The next step was to add all the digits of the two numbers each to each at the same instant, but reserving a certain mechanical memorandum, wherever a carriage became due. These carriages were then to be executed successively."
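To make the two-phase idea in the quote concrete, here is a minimal toy sketch in Python (my own illustration, not IBM's carry circuit or Babbage's actual mechanism): first add every pair of digits "at the same instant" without carrying, then execute the deferred carriages in a second pass.

    def add_digitwise(a_digits, b_digits):
        """Add two numbers given as lists of decimal digits, least significant first."""
        # Phase 1: add each pair of digits simultaneously, keeping the raw
        # column sums as a "mechanical memorandum" of where a carriage is due.
        sums = [x + y for x, y in zip(a_digits, b_digits)]

        # Phase 2: execute the carriages successively.
        result, carry = [], 0
        for s in sums:
            s += carry
            result.append(s % 10)
            carry = s // 10
        if carry:
            result.append(carry)
        return result

    # 958 + 647 = 1605, digits least significant first
    print(add_digitwise([8, 5, 9], [7, 4, 6]))  # [5, 0, 6, 1]

The second pass here still ripples the carries one column at a time, which is what Babbage describes; the later "anticipating" and lookahead schemes are about collapsing that successive pass as well.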