
The biggest factor working to their advantage was that the tech back then was much simpler and more robust. Getting today's tech to work for a decade without interruption would be a very tall order: layer upon layer of abstraction has made it impossible to know for sure that there are no edge cases that will trigger only once every three years or so.


The physical hardware of 1970s probe computers actually had more parts and complexity; Voyager used magnetic tape recorders, for example. Newer tech allows for simpler computer hardware, but it shifts problems into the realm of software and file-system management.

Both Spirit (on Mars) and New Horizons (at Pluto) suffered days of downtime while file-system issues puzzled the mission staff.

The New Horizons case was a crazy mad scramble: the probe was scheduled to pass Pluto in a few days whether it was working or not. There was no re-do. Dozens of choice careers hung in the balance.

Probe chips are still not very powerful by today's standards because they are designed to survive the harsh conditions of space. They are thus more comparable to a 1980s PC, and may largely stay that way, since smaller components don't handle radiation well.

I read somewhere it's estimated that even the cosmic radiation reaching Earth's surface fouls up the typical desktop PC roughly once a year. Most users just grumble at Microsoft and reboot.
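To make the bit-flip failure mode concrete, here is a minimal sketch (illustrative only, in Python) of a single-event upset: one random bit in a stored word gets flipped, and a simple even-parity check catches it. Real hardened systems use stronger SECDED ECC codes that can also correct the flip, not just detect it; the function names here are my own.

```python
import random

def parity(word: int) -> int:
    """Even-parity bit over a word: 1 if the popcount is odd."""
    return bin(word).count("1") % 2

def flip_random_bit(word: int, width: int = 32) -> int:
    """Simulate a single-event upset by flipping one random bit."""
    return word ^ (1 << random.randrange(width))

stored = 0xDEADBEEF
stored_parity = parity(stored)  # recorded when the word was written

corrupted = flip_random_bit(stored)

# Flipping exactly one bit always changes the word's parity,
# so a single upset is always detected by this check.
detected = parity(corrupted) != stored_parity
print(f"original:  {stored:#010x}")
print(f"corrupted: {corrupted:#010x}  upset detected: {detected}")
```

Parity alone can't tell you *which* bit flipped (and a double flip slips through unnoticed), which is why ECC memory uses Hamming-style codes instead.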


I sort of agree, although there is actually a lot of older stuff still made on fabs with huge (transistor) feature sizes and limited (software) feature sets that could be used to build extremely reliable long-term systems.



