
Doesn't the difficulty/expense of keeping all those processors running in absolute cycle-for-cycle lockstep increase dramatically with the amount of redundancy?

I vaguely remember being taught that this is the big problem in developing real-time safety-critical systems.




If you make a bunch of (highly redundant) small processors, then I don't see why it would be much harder than the clock distribution issues in large processors, which also need to keep all their parts in sync.

Alternately, it's possible to use asynchronous processor design and not worry about clock distribution. The tools aren't really there, but there have been async processors made before, and they work. They handle synchronization with local handshaking, instead of distributing a clock signal everywhere.
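
Something like this toy version of the handshake shows the idea (a software analogy only; the producer/consumer and event names are made up, real async design tools look nothing like this): the two sides synchronize only with each other through request/acknowledge, with no shared clock anywhere.

    import threading
    import queue

    # Toy model of local handshaking: a producer and a consumer synchronize
    # only with each other via request/acknowledge events, no global clock.
    req = threading.Event()   # producer -> consumer: "data is valid"
    ack = threading.Event()   # consumer -> producer: "data was taken"
    channel = {"data": None}
    results = queue.Queue()

    def producer(values):
        for v in values:
            channel["data"] = v
            req.set()     # raise request
            ack.wait()    # wait for the acknowledge
            ack.clear()   # complete the handshake, ready for the next item

    def consumer(n):
        for _ in range(n):
            req.wait()                        # wait for a request
            results.put(channel["data"] * 2)  # "compute" on the latched data
            req.clear()
            ack.set()                         # acknowledge receipt

    vals = [1, 2, 3]
    t1 = threading.Thread(target=producer, args=(vals,))
    t2 = threading.Thread(target=consumer, args=(len(vals),))
    t1.start(); t2.start(); t1.join(); t2.join()
    print([results.get() for _ in vals])      # [2, 4, 6]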

Another option is to abandon the cycle-for-cycle lockstep requirements, and just ensure that the synchronization time is bounded, and reasonably low. I know there have been some papers published about using this kind of globally-asynchronous-locally-synchronous architecture for realtime apps.
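
As a rough software analogy of that (numbers and structure made up here, not taken from any of those papers): the replicas are free to drift in cycle count between checkpoints, as long as the results still agree at each checkpoint and the worst-case skew stays under a known bound.

    import random

    # Toy model of bounded synchronization: redundant replicas run freely
    # between checkpoints, so their cycle counts drift, but results must
    # agree at every checkpoint and the skew stays under a known bound.
    MAX_SKEW_PER_SEGMENT = 5   # worst-case extra cycles per segment

    class Replica:
        def __init__(self):
            self.cycles = 0
            self.state = 0

        def run_segment(self, work):
            # Same work, slightly different local timing on every replica.
            self.cycles += work + random.randint(0, MAX_SKEW_PER_SEGMENT)
            self.state += work
            return self.state

    replicas = [Replica() for _ in range(3)]
    for n, work in enumerate([10, 20, 30], start=1):
        outputs = [r.run_segment(work) for r in replicas]
        skew = max(r.cycles for r in replicas) - min(r.cycles for r in replicas)
        assert len(set(outputs)) == 1              # results still agree
        assert skew <= n * MAX_SKEW_PER_SEGMENT    # drift stays bounded
        print(f"checkpoint {n}: state={outputs[0]}, cycle skew={skew}")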


The problem is that when there is an error, and there will be, you need to correct the processor, unit, or other part of the circuit that is now in the wrong state.
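
Detecting the disagreement is the easy half; something along these lines (purely illustrative, with integers standing in for the unit's state) has to happen to pull the bad copy back into a known-good state:

    # Bit-wise 2-of-3 majority vote over three redundant copies of a state
    # word, then rewrite whichever copy disagrees with the voted value.
    def majority(a, b, c):
        return (a & b) | (a & c) | (b & c)

    states = [0b1011_0010, 0b1011_0010, 0b1011_0010]  # three redundant copies
    states[1] ^= 0b0000_1000                          # copy 1 takes a bit flip

    voted = majority(*states)
    bad = [i for i, s in enumerate(states) if s != voted]
    for i in bad:
        states[i] = voted      # restore the diverged copy from the vote
    print(bad, [bin(s) for s in states])              # [1], all copies agree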


It could be that I just don't know enough about redundant system design, but I'm pretty sure the way Voyager worked was that each computer ran independently and identically, and the results of the computations were simply compared across the computers. In other words, you run it like Folding@Home or SETI@Home, which send each job to multiple clients. That doesn't seem like a difficult problem to tackle.
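
Roughly this, in toy form (the worker function and the fault injection are made up just to illustrate the comparison step):

    from collections import Counter

    # Run the same job on several independent workers and accept a result
    # only when a majority of them agree, like the *@Home projects do.
    def worker(job, faulty=False):
        result = job * job                # stand-in for the real computation
        return result + 1 if faulty else result

    def run_redundant(job, n=3, faulty_index=None):
        results = [worker(job, faulty=(i == faulty_index)) for i in range(n)]
        winner, votes = Counter(results).most_common(1)[0]
        if votes <= n // 2:
            raise RuntimeError("no majority -- rerun the job")
        return winner, results

    print(run_redundant(7))                   # all three agree: (49, [49, 49, 49])
    print(run_redundant(7, faulty_index=1))   # one bad answer gets outvoted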



