I work in HPC, and while security isn't an issue for your typical simulation code, correctness certainly is. Spending a million CPU hours on a supercomputer computing junk because memory unsafety caused the simulation to corrupt itself, and then publishing a paper based on those results, isn't good.
Many times, when I've helped a researcher get their code running on a cluster, I have discovered that the code crashes at runtime if bounds checking is enabled. The usual response is: "this can't be a problem, because we (or someone else) have published papers with results computed with this program". Sorry, sunshine, that isn't how it works. Maybe the corruption is entirely benign, but how can you tell?