
> tests that run every single build even when nothing checked by them has changed

Oh yes, because we can always be sure that a change in one place will never cause a test in another place to fail.




Not when it's referentially transparent. In that case, there's no need to run the tests if a function or the functions it calls haven't changed. Of course, if that's the case, the tests are probably fast enough that running them won't matter much anyway.
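To make the scheme concrete, here is a minimal sketch in Python of what "skip the test if nothing it checks has changed" could look like for a pure function. Everything here is illustrative rather than any real build tool's API: the cache file name and the fingerprint/run_if_changed helpers are made up, and the idea is just that a hash of the function's source plus its pure dependencies is enough to decide whether the test could possibly produce a different result.

    # Hypothetical sketch, not any particular test runner's API.
    import hashlib
    import inspect
    import json
    from pathlib import Path

    CACHE = Path(".test_fingerprints.json")  # illustrative cache location

    def fingerprint(*funcs):
        # Hash the source of the test and the pure functions it depends on.
        h = hashlib.sha256()
        for f in funcs:
            h.update(inspect.getsource(f).encode())
        return h.hexdigest()

    def add(a, b):  # the pure function under test
        return a + b

    def test_add():
        assert add(2, 3) == 5

    def run_if_changed(test, *deps):
        # Skip the test when its fingerprint matches the last passing run.
        cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
        fp = fingerprint(test, *deps)
        if cache.get(test.__name__) == fp:
            print(f"skip {test.__name__}: nothing it checks has changed")
            return
        test()
        cache[test.__name__] = fp
        CACHE.write_text(json.dumps(cache))

    run_if_changed(test_add, add)

The catch, as the rest of the thread points out, is that this only tells you the logical result can't change; it says nothing about the hardware the code runs on.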


If you're working with pure functions, you can always be sure of this.
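A toy contrast (hypothetical names) of why that holds at the logical level: a pure function's result is fixed by its arguments alone, so an edit elsewhere in the codebase can't change what its test observes, whereas a function that reads shared state can break when someone edits something the test never mentions.

    # Illustrative only: pure vs. impure under a change made "somewhere else".
    BASE_FEE = 5  # module-level state, edited far away from this test

    def total_pure(price, fee):
        # Pure: result depends only on the arguments.
        return price + fee

    def total_impure(price):
        # Impure: silently depends on BASE_FEE.
        return price + BASE_FEE

    assert total_pure(100, 5) == 105
    assert total_impure(100) == 105  # breaks if BASE_FEE changes elsewhere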


Not on real hardware.


Are hardware-level bit flips actually a part of your threat model?


That's not the only way in which real hardware falls short of the theoretical model. I've been burned by this at least a half dozen times in my career.


Unironically yes. Memory and CPU errors happen.


Not to mention things like caching and consistency bugs, either in hardware (e.g. CPU errata) or the software stack which is implementing the abstraction.

Or stuff like reordering a piece of code in a way that should be a no-op, but that causes two expensive operations to run simultaneously on the same die at runtime, resulting in a CPU undervoltage event that triggers a reboot. I ran into this exact problem about 6 months ago. We could argue about whether the CPU, motherboard, or power supply vendor was out of spec, but in the real world with deployed hardware that doesn't matter. You revert and run the code that doesn't trigger reboots during high workloads.

Having a test suite catch this before you deploy to production is a good thing.



