
When I was fresh out of college, I worked as a contractor for a prominent agricultural equipment manufacturer. I was responsible for building out the touch-screen interface for the radio (a Qt app). An engineer who worked for the equipment manufacturer told me my application wasn't good enough because it needed to be able to operate correctly in the face of arbitrary bit flips "from lightning strikes"--I kindly asked her to show me the requirement, which was sufficient to get her to relent, but that was still the wackiest request I've ever received.



Asking to see the requirement is a great way to push back on feature creep. There's a lot of cargo culting that goes on around "protection against bit flips". You sometimes have to go a step further and ask what error rate you're required to stay below. Once you have that number, you can start asking what your current error rate is without mitigations, and how much a given mitigation will actually reduce it.
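
To make that concrete, the budget check is simple arithmetic. All of the numbers below are made up for illustration, not from any real spec; plug in whatever your actual requirement and device data say:

    # Hypothetical error-budget check (Python). Every rate here is invented.
    required_failures_per_hour = 1e-6          # what the requirement demands
    raw_upsets_per_bit_hour    = 1e-12         # unmitigated bit-flip rate, per bit
    bits_exposed               = 64 * 1024 * 8 # e.g. a 64 KiB buffer
    mitigation_factor          = 1e-3          # say ECC cuts residual errors 1000x

    unmitigated = raw_upsets_per_bit_hour * bits_exposed
    mitigated   = unmitigated * mitigation_factor

    print(f"unmitigated: {unmitigated:.2e}/h, mitigated: {mitigated:.2e}/h")
    print("meets requirement" if mitigated < required_failures_per_hour
          else "needs more mitigation")

Often the unmitigated number is already below the requirement, which is exactly the point of asking for it.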


My favorite entry in that problem space is metastability[1].

Do you interface two different clock domains (which is basically most things)? Guess what: all of your computing is built on the "chance" that bits won't flip.

Granted, statistics make this pretty solid, but it kinda blew my mind when I first stumbled across it.

[1] https://en.wikipedia.org/wiki/Metastability_(electronics)


Yup, a large portion of hardware design is based on getting below a required maximum failure rate. For metastability, you just keep adding more flip-flops. BTW, the cache invalidation request may be due to this. They figured they could more easily hit their time-between-failures target if they could discount time spent in S1.
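
A rough sketch of why "just keep adding flip-flops" works, using the textbook synchronizer MTBF model; the device constants below are invented for illustration, real values come from the FPGA/ASIC vendor's characterization data:

    # Textbook metastability MTBF estimate (Python sketch).
    # MTBF ~= exp(t_settle / tau) / (T_w * f_clk * f_data)
    import math

    tau    = 200e-12   # metastability resolution time constant (s), invented
    T_w    = 100e-12   # metastability "window" (s), invented
    f_clk  = 500e6     # destination clock (Hz)
    f_data = 50e6      # rate of asynchronous input transitions (Hz)
    T_clk  = 1 / f_clk

    def mtbf_seconds(stages):
        # Each extra flip-flop buys roughly one more clock period of settling time,
        # which multiplies the MTBF by exp(T_clk / tau).
        t_settle = stages * T_clk
        return math.exp(t_settle / tau) / (T_w * f_clk * f_data)

    for stages in (1, 2, 3):
        print(f"{stages} FF: MTBF ~ {mtbf_seconds(stages):.3e} s")

The exponential in the numerator is why each extra synchronizer stage buys you orders of magnitude, and why designers stop at two or three stages once the MTBF comfortably clears the required failure rate.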


Hurray digital signal synchronization!



