Maxwell makes an entire array of microelectronics components, generally intended for use in devices deployed into space environments.
There's an interesting (if somewhat technical) document here: http://www51.honeywell.com/aero/common/documents/myaerospace...
So this is to protect your orbital weapons platform from EMP blasts?
This sounds silly, but the vast majority of the cost is non-recurring engineering cost. Manufacturing it would be a relatively cheap matter of sending the design to a fab like TSMC along with a bundle of money. Transistors are dirt cheap.
And anyway, what exactly does "redundancy" mean? If a rocket engine controller is triple redundant, how does that work? Are there three propellant valves in parallel, so each computer controls one-third of the thrust? Are they in series, so that failure of one computer disables the propulsion function? Is there a majority vote system, and is it electronic, electromechanical, or fluidic? Redundancy is not pixie dust that magically makes your system design better.
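To make the majority-vote option concrete, here's a minimal sketch (all names invented) of a 2-of-3 voter for a triple-redundant controller: each channel computes a command independently, and a value passes only if at least two channels agree.

```python
# Hypothetical sketch of triple modular redundancy (TMR) voting.
# Each of three channels computes a valve command; the voter masks
# a single faulty channel by requiring two channels to agree.

def majority_vote(a, b, c, tolerance=0.0):
    """Return the majority value among three channel outputs.

    Raises RuntimeError if no two channels agree within `tolerance`;
    a real system would map that to a safe-state action.
    """
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= tolerance:
            return (x + y) / 2  # the agreeing pair outvotes the third channel
    raise RuntimeError("no two channels agree -- enter safe state")

# One faulty channel (0.97) is outvoted by the two agreeing ones:
print(majority_vote(0.50, 0.50, 0.97))  # -> 0.5
```

Note that this only masks faults in the channels, not in the voter itself, which is exactly why the question of whether the vote is electronic, electromechanical, or fluidic matters.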
A sample Google interview question is to design the protocols to run a cluster of unreliable computers. Should there be a MIL-SPEC master computer? Should the cluster elect a master? Or several oligarch servers? Where does an outside agent submit a request, and what does it do if the request is not answered? Designing reliable systems is hard.
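For the "elect a master" option, here's a toy sketch (names invented) of a bully-style election: the highest-numbered reachable node wins. Real coordination systems (Paxos, Raft, ZooKeeper) additionally need epochs and quorums to survive network partitions; this sketch ignores all of that.

```python
# Toy bully-style election among unreliable nodes: the highest-id
# node that the caller can reach becomes master. `reachable` stands
# in for a real health check / heartbeat mechanism.

def elect_master(node_ids, reachable):
    """Pick the highest-id reachable node, or fail if none respond."""
    alive = [n for n in node_ids if reachable(n)]
    if not alive:
        raise RuntimeError("no reachable nodes -- cluster unavailable")
    return max(alive)

# Node 3 is down, so node 2 wins the election:
up = {1: True, 2: True, 3: False}
print(elect_master([1, 2, 3], lambda n: up[n]))  # -> 2
```

The unanswered-request question is the hard part this sketch dodges: a client that gets no reply cannot tell a dead master from a slow network, which is why real protocols use timeouts plus re-election rather than a single authoritative check.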
For self-destruct systems you probably want all three to agree before going bang, while for an emergency escape system you probably want any one of the three to be sufficient to deploy.
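The two policies above boil down to AND versus OR voting, which a sketch makes explicit (channel inputs are hypothetical booleans):

```python
# Unanimous consent for an irreversible action vs. any-one-of-three
# for a protective action. The asymmetry trades false positives
# against false negatives in opposite directions.

def self_destruct_authorized(ch1, ch2, ch3):
    # All three channels must agree before doing something irreversible.
    return ch1 and ch2 and ch3

def escape_triggered(ch1, ch2, ch3):
    # Any single channel is enough for a fail-safe action.
    return ch1 or ch2 or ch3
```

Put another way: for self-destruct, a spurious trigger is catastrophic, so you bias against firing; for escape, a missed trigger is catastrophic, so you bias toward firing.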
I vaguely remember being taught that this is the big problem developing real-time safety critical systems.
Alternately, it's possible to use asynchronous processor design and not worry about clock distribution. The tools aren't really there, but there have been async processors made before, and they work. They handle synchronization with local handshaking, instead of distributing a clock signal everywhere.
Another option is to abandon the cycle-for-cycle lockstep requirements, and just ensure that the synchronization time is bounded, and reasonably low. I know there have been some papers published about using this kind of globally-asynchronous-locally-synchronous architecture for realtime apps.
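A rough sketch of that relaxed-lockstep idea (all names and the skew bound invented): instead of comparing replicas every cycle, each replica timestamps its outputs, and a checker only requires that matching outputs appear within a bounded skew.

```python
# Relaxed lockstep: replicas run on independent clocks, and the
# checker accepts their output streams as consistent if the i-th
# values match and their timestamps differ by at most a bound.

MAX_SKEW = 0.005  # seconds; an assumed synchronization bound

def outputs_consistent(log_a, log_b, max_skew=MAX_SKEW):
    """log_a, log_b: lists of (timestamp, value) from two replicas."""
    if len(log_a) != len(log_b):
        return False
    return all(va == vb and abs(ta - tb) <= max_skew
               for (ta, va), (tb, vb) in zip(log_a, log_b))

# Replicas agree on values and are within 2 ms of each other:
print(outputs_consistent([(0.000, 1), (0.010, 2)],
                         [(0.002, 1), (0.012, 2)]))  # -> True
```

The appeal for realtime work is that the skew bound becomes just another term in the worst-case timing budget, rather than a clock-distribution problem.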
After Ariane 5 crashed spectacularly due to a software error that affected the two on-board computers and the ground control unit alike (http://en.wikipedia.org/wiki/Ariane_5_Flight_501), there was talk about having the same software developed by multiple independent teams, and then using the different versions for error correction. Sounds like a crazy idea and probably won't work, but I don't really know of a better solution either.
Of course, it's useless if the specification is wrong, and the assumption that the different versions will fail in different ways doesn't seem to hold water.
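That scheme is known as N-version programming. A minimal sketch of the idea, with three invented "independent implementations" of integer square root standing in for the separately developed versions:

```python
# N-version programming sketch: the same spec implemented three
# different ways, with a runtime majority voter over the results.
import math

def isqrt_newton(n):
    # Integer Newton's method.
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

def isqrt_float(n):
    # Floating-point shortcut (a plausible source of divergent bugs).
    return int(math.sqrt(n))

def isqrt_linear(n):
    # Brute-force counting.
    i = 0
    while (i + 1) * (i + 1) <= n:
        i += 1
    return i

def n_version(n, versions=(isqrt_newton, isqrt_float, isqrt_linear)):
    results = [v(n) for v in versions]
    for r in results:
        if results.count(r) >= 2:  # accept any majority result
            return r
    raise RuntimeError("versions disagree -- no majority")

print(n_version(10**6))  # -> 1000
```

The objection above is exactly what the Knight-Leveson experiments found: independently written versions tend to fail on the same hard inputs, so the versions' failures are not statistically independent and the voter buys less than it appears to.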
(But I don't think I need the cavity search every time I enter the US that might come along with that...)